CedarBackup2-2.22.0/Changelog

Version 2.22.0   09 May 2013

 * Add eject-related kludges to work around observed behavior.
 * New config option eject_delay, to slow down open/close
 * Unlock tray with 'eject -i off' to handle potential problems

Version 2.21.1   21 Mar 2013

 * Apply patches provided by Jan Medlock as Debian bugs.
 * Fix typo in manpage (showed -s instead of -D)
 * Support output from latest /usr/bin/split (' vs. `)

Version 2.21.0   12 Oct 2011

 * Update CREDITS file to consistently credit all contributors.
 * Minor tweaks based on PyLint analysis (mostly config changes).
 * Make ISO image unit tests more robust in writersutiltests.py.
   - Handle failures with unmount (wait 1 second and try again)
   - Programmatically disable (and re-enable) the GNOME auto-mounter
 * Implement configurable recursion for collect action.
   - Update collect.py to handle recursion (patch by Zoran Bosnjak)
   - Add new configuration item CollectDir.recursionLevel
   - Update user manual to discuss new functionality

Version 2.20.1   19 Oct 2010

 * Fix minor formatting issues in manpages, pointed out by Debian lintian.
 * Changes required to make code compatible with Python 2.7
   - StreamHandler no longer accepts strm= argument (closes: #3079930)
   - Modify logfile os.fdopen() to be explicit about read/write mode
   - Fix tests that extract a tarfile twice (exposed by new error behavior)

Version 2.20.0   07 Jul 2010

 * This is a cleanup release with no functional changes.
 * Switch to minimum Python version of 2.5 (everyone should have it now).
   - Make cback script more robust in the case of a bad interpreter version
   - Change file headers, comments, manual, etc. to reference Python 2.5
   - Convert to use @staticmethod rather than x = staticmethod(x)
   - Change interpreter checks in test.py, cli.py and span.py
   - Remove Python 2.3-compatible versions of util.nullDevice() and util.Pipe
 * Configure pylint and execute it against the entire codebase.
   - Fix a variety of minor warnings and suggestions from pylint
   - Move unit tests into testcase folder to avoid test.py naming conflict
 * Remove "Translate [x:y] into [a:b]" debug message for uid/gid translation.
 * Refactor out util.isRunningAsRoot() to replace scattered os.getuid() calls.
 * Remove boilerplate comments "As with all of the ... " in config code.
 * Refactor checkUnique() and parseCommaSeparatedString() from config to util.
 * Add note in manual about intermittent problems with DVD writer soft links.

Version 2.19.6   22 May 2010

 * Work around strange stderr file descriptor bugs discovered on Cygwin.
 * Tweak expected results for tests that fail on Cygwin with Python 2.5.x.
 * Set up command overrides properly so full test suite works on Debian.
 * Add refresh_media_delay configuration option and related functionality.

Version 2.19.5   10 Jan 2010

 * Add customization support, so Debian can use wodim and genisoimage.
 * SF bug #2929447 - fix cback-span to only ask for media when needed
 * SF bug #2929446 - add retry logic for writes in cback-span

Version 2.19.4   16 Aug 2009

 * Add support for the Python 2.6 interpreter.
   - Use hashlib instead of deprecated sha module when available
   - Use set type rather than deprecated sets.Set when available
   - Use tarfile.format rather than deprecated tarfile.posix when available
   - Fix testGenerateTarfile_002() so expectations match Python 2.6 results

Version 2.19.3   29 Mar 2009

 * Fix minor epydoc typos, mostly in @sort directives.
 * Removed support for user manual PDF format (see doc/pdf).

Version 2.19.2   08 Dec 2008

 * Fix cback-span problem when writing store indicators.

Version 2.19.1   15 Nov 2008

 * Fix bug when logging strange filenames.
Version 2.19.0   05 Oct 2008

 * Fix a few typos in the CREDITS file.
 * Update README to properly reference SourceForge site.
 * Add option to peer configuration.

Version 2.18.0   05 May 2008

 * Add the ability to dereference links when following them.
   - Add util.dereferenceLink() function
   - Add dereference flag to FilesystemList.addDirContents()
   - Add CollectDir.dereference attribute
   - Modify collect action to obey CollectDir.dereference
   - Update user manual to discuss new attribute

Version 2.17.1   26 Apr 2008

 * Updated copyright statement slightly.
 * Updated user manual.
   - Brought copyright notices up-to-date
   - Fixed various URLs that didn't reference SourceForge
 * Fixed problem with link_depth (closes: #1930729).
   - Can't add links directly, they're implicitly added later by tar
   - Changed FilesystemList to use includePath=false for recursive links

Version 2.17.0   20 Mar 2008

 * Change suggested execution index for Capacity extension in manual.
 * Provide support for application-wide diagnostic reporting.
   - Add util.Diagnostics class to encapsulate information
   - Log diagnostics when Cedar Backup first starts
   - Print diagnostics when running unit tests
   - Add a new --diagnostics command-line option
 * Clean up filesystem code that deals with file age, and improve unit tests.
   - Some platforms apparently cannot set file ages precisely
   - Change calculateFileAge() to use floats throughout, which is safer
   - Change removeYoungFiles() to explicitly check on whole days
   - Put a 1-second fudge factor into unit tests when setting file ages
 * Fix some unit test failures discovered on Windows XP.
   - Fix utiltests.TestFunctions.testNullDevice_001()
   - Fix filesystemtests.TestBackupFileList.testGenerateFitted_004()
   - Fix typo in filesystemtests.TestFilesystemList.testRemoveLinks_002()

Version 2.16.0   18 Mar 2008

 * Make name attribute optional in RemotePeer constructor.
 * Add support for collecting soft links (closes: #1854631).
   - Add linkDepth parameter to FilesystemList.addDirContents()
   - Add CollectDir.linkDepth attribute
   - Modify collect action to obey CollectDir.linkDepth
   - Update user manual to discuss new attribute
   - Document "link farm" option for collect configuration
 * Implement a capacity-checking extension (closes: #1915496).
   - Add new extension in CedarBackup2/extend/capacity.py
   - Refactor ByteQuantity out of split.py and into config.py
   - Add total capacity and utilization to MediaCapacity classes
   - Update user manual to discuss new extension

Version 2.15.3   16 Mar 2008

 * Fix testEncodePath_009() to be aware of "UTF-8" encoding.
 * Fix typos in the PostgreSQL extension section of the manual.
 * Improve logging when stage action fails (closes: #1854635).
 * Fix stage action so it works for local users (closes: #1854634).

Version 2.15.2   07 Feb 2008

 * Updated copyright statements now that code changed in year 2008.
 * Fix two unit test failures when using Python 2.5 (SF #1861878).
   - Add new function testutil.hexFloatLiteralAllowed()
   - Fix splittests.TestByteQuantity.testConstructor_004() for 0xAC
   - Fix configtests.TestBlankBehavior.testConstructor_006() for 0xAC

Version 2.15.1   19 Dec 2007

 * Improve error reporting for managed client action failures.
 * Make sure that managed client failure does not kill entire backup.
 * Add appendix "Securing Password-less SSH Connection" to user manual.

Version 2.15.0   18 Dec 2007

 * Minor documentation tweaks discovered during 3.0 development.
 * Add support for a new managed backup feature.
   - Add a new configuration section (PeersConfig)
   - Change peers configuration in to just override
   - Modify stage process to take peers list from peers section (if available)
   - Add new configuration in options and remote peers to support remote shells
   - Update user manual to discuss managed backup concept and configuration
   - Add executeRemoteCommand() and executeManagedAction() on peer.RemotePeer

Version 2.14.0   19 Sep 2007

 * Deal properly with programs that localize their output.
   - Create new util.sanitizeEnvironment() function to set $LANG=C
   - Call new sanitizeEnvironment() function inside util.executeCommand()
   - Change extend/split._splitFile() to be more verbose about problems
   - Update Extension Architecture Interface to mandate $LANG=C
   - Add split unit tests to catch any locale-related regressions
   - Thanks to Lukasz Nowak for initial debugging in split extension

Version 2.13.2   10 Jul 2007

 * Tweak some docstring markup to work with Epydoc beta 1.
 * Apply documentation patch from Lukasz K. Nowak.
   - Document that mysql extension can back up remote databases
   - Fix typos in extend/sysinfo.py
 * Clean up some configuration error messages to be clearer.
   - Make sure that reported errors always include enough information
   - Add a prefix argument to some of the specialized lists in util.py
 * Catch invalid regular expressions in config and filesystem code.
   - Add new util.RegexList list to contain only valid regexes
   - Use RegexList in config.ConfigDir and config.CollectConfig
   - Use RegexList in subversion.RepositoryDir and mbox.MboxDir
   - Throw ValueError on bad regex in FilesystemList remove() methods
   - Use RegexList in FilesystemList for all lists of patterns

Version 2.13.1   29 Mar 2007

 * Fix ongoing problems re-initializing previously-written DVDs
   - Even with -Z, growisofs sometimes wouldn't overwrite DVDs
   - It turns out that this ONLY happens from cron, not from a terminal
   - The solution is to use the undocumented option -use-the-force-luke=tty
   - Also corrected dvdwriter to use option "-dry-run" not "--dry-run"

Version 2.13.0   25 Mar 2007

 * Change writeIndicator() to raise exception on failure (closes #53).
 * Change buildNormalizedPath() for leading "." so files won't be hidden
 * Remove bogus usage of tempfile.NamedTemporaryFile in remote peer.
 * Refactored some common action code into CedarBackup2.actions.util.
 * Add unit tests for a variety of basic utility functions (closes: #45).
   - Error-handling was improved in some utility methods
   - Fundamentally, behavior should be unchanged
 * Reimplement DVD capacity calculation (initial code from Dmitry Rutsky).
   - This is now done using a growisofs dry run, without -Z
   - The old dvd+rw-mediainfo method was unreliable on some systems
   - Error-handling behavior on CdWriter was also tweaked for consistency
 * Add code to check media before writing to it (closes: #5).
   - Create new check_media store configuration option
   - Implement new initialize action to initialize rewritable media
   - Media is initialized by writing an initial session with media label
   - The store action now always writes a media label as well
   - Update user manual to discuss the new behavior
   - Add unit tests for new configuration
 * Implement an optimized media blanking strategy (closes: #48).
   - When used, Cedar Backup will only blank media when it runs out of space
   - Initial implementation and manual text provided by Dmitry Rutsky
   - Add new blanking_behavior store configuration options
   - Update user manual to document options and discuss usage
   - Add unit tests for new configuration

Version 2.12.1   26 Feb 2007

 * Fix typo in new split section in the user manual.
 * Fix incorrect call to new writeIndicatorFile() function in stage action.
 * Add notes in manual on how to find gpg and split commands.

Version 2.12.0   23 Feb 2007

 * Fix some encrypt unit tests related to config validation
 * Make util.PathResolverSingleton a new-style class (i.e. inherit from object)
 * Modify util.changeOwnership() to be a no-op for None user or group
 * Created new split extension to split large staged files.
   - Refactored common action utility code into actions/util.py.
   - Update standard actions, cback-span, and encrypt to use refactored code
   - Updated user manual to document the new extension and restore process.

Version 2.11.0   21 Feb 2007

 * Fix log message about SCSI id in writers/dvdwriter.py.
 * Remove TODO from public distribution (use Bugzilla instead).
 * Minor changes to mbox functionality (refactoring, test cleanup).
 * Fix bug in knapsack implementation, masked by poor test suite.
 * Fix filesystem unit tests that had typos in them and wouldn't work
 * Reorg user manual to move command-line tools to own chapter (closes: #33)
 * Add validation for duplicate peer and extension names (closes: #37, #38).
 * Implement new cback-span command-line tool (closes: #51).
   - Create new util/cback-span script and CedarBackup2.tools package
   - Implement guts of script in CedarBackup2/tools/span.py
   - Add new BackupFileList.generateSpan() method and tests
   - Refactor other util and filesystem code to make things work
   - Add new section in user manual to discuss new command
 * Rework validation requiring at least one item to collect (closes: #34).
   - This is no longer a validation error at the configuration level
   - Instead, the collect action itself will enforce the rule when it is run
 * Support a flag in store configuration (closes: #39).
   - Change StoreConfig, CdWriter and DvdWriter to accept new flag
   - Update user manual to document new flag, along with warnings about it
 * Support repository directories in Subversion extension (closes: #46).
   - Add configuration modeled after
   - Make configuration value optional and for reference only
   - Refactor code and deprecate BDBRepository and FSFSRepository
   - Update user manual to reflect new functionality

Version 2.10.1   30 Jan 2007

 * Fix a few places that still referred only to CD/CD-RW.
 * Fix typo in definition of actions.constants.DIGEST_EXTENSION.

Version 2.10.0   30 Jan 2007

 * Add support for DVD writers and DVD+R/DVD+RW media.
   - Create new writers.dvdwriter module and DvdWriter class
   - Support 'dvdwriter' device type, and 'dvd+r' and 'dvd+rw' media types
   - Rework user manual to properly discuss both CDs and DVDs
 * Support encrypted staging directories (closes: #33).
   - Create new 'encrypt' extension and associated unit tests
   - Document new extension in user manual
 * Support new action ordering mechanism for extensions.
   - Extensions can now specify dependencies rather than indexes
   - Rewrote cli._ActionSet class to use DirectedGraph for dependencies
   - This functionality is not yet "official"; that will happen later
 * Refactor and clean up code that implements standard actions.
   - Split action.py into various other files in the actions package
   - Move a few of the more generic utility functions into util.py
   - Preserve public interface via imports in otherwise empty action.py
   - Change various files to import from the new module locations
 * Revise and simplify the implied "image writer" interface in CdWriter.
   - Add the new initializeImage() and addImageEntry() methods
   - Interface is now initializeImage(), addImageEntry() and writeImage()
   - Rework actions.store.writeImage() to use new writer interface
 * Refactor CD writer functionality and clean up code.
   - Create new writers package to hold all image writers
   - Move image.py into writers/util.py package
   - Move most of writer.py into writers/cdwriter.py
   - Move writer.py validate functions into writers/util.py
   - Move writertests.py into cdwritertests.py
   - Move imagetests.py into writersutiltests.py
   - Preserve public interface via imports in otherwise empty files
   - Change various files to import from the new module locations
 * More general code cleanup and minor enhancements.
   - Modify util/test.py to accept named tests on command line
   - Fix rebuild action to look at store config instead of stage.
   - Clean up xmlutil imports in mbox and subversion extensions
   - Copy Mac OS X (darwin) errors from store action into rebuild action
   - Check arguments to validateScsiId better (no None path allowed now)
   - Rename variables in config.py to be more consistent with each other
   - Add new excludeBasenamePatterns flag to FilesystemList
   - Add new addSelf flag to FilesystemList.addDirContents()
   - Create new RegexMatchList class in util.py, and add tests
   - Create new DirectedGraph class in util.py, and add tests
   - Create new sortDict() function in util.py, and add tests
 * Create unit tests for functionality that was not explicitly tested before.
   - ActionHook, PreActionHook, PostActionHook, CommandOverride (config.py)
   - AbsolutePathList, ObjectTypeList, RestrictedContentList (util.py)

Version 2.9.0   18 Dec 2006

 * Change mbox extension to use ISO-8601 date format when calling grepmail.
 * Fix error-handling in generateTarfile() when target dir is missing.
 * Tweak pycheckrc to find fewer expected errors (from standard library).
 * Fix Debian bug #403546 by supporting more CD writer configurations.
   - Be looser with SCSI "methods" allowed in valid SCSI id (update regex)
   - Make config section's parameter optional
   - Change CdWriter to support "hardware id" as either SCSI id or device
   - Implement cdrecord commands in terms of hardware id instead of SCSI id
   - Add documentation in writer.py to discuss how we talk to hardware
   - Rework user manual's discussion of how to configure SCSI devices
 * Update Cedar Backup user manual.
   - Re-order setup procedures to modify cron at end (Debian #403662)
   - Fix minor typos and misspellings (Debian #403448 among others)
   - Add discussion about proper ordering of extension actions

Version 2.8.1   04 Sep 2006

 * Changes to fix, update and properly build Cedar Backup manual
   - Change DocBook XSL configuration to use "current" stylesheet
   - Tweak manual-generation rules to work around XSL toolchain issues
   - Document where to find grepmail utility in Appendix B
   - Create missing documentation for mbox exclusions configuration
   - Bumped copyright dates to show "(c) 2005-2006" where needed
   - Made minor changes to some sections based on proofreading

Version 2.8.0   24 Jun 2006

 * Remove outdated comment in xmlutil.py about dependency on PyXML.
 * Tweak wording in doc/docbook.txt to make it clearer.
 * Consistently rework "project description" everywhere.
 * Fix some simple typos in various comments and documentation.
 * Added recursive flag (default True) to FilesystemList.addDirContents().
 * Added flat flag (default False) to BackupFileList.generateTarfile().
 * Created mbox extension in CedarBackup2.extend.mbox (closes: #31).
   - Updated user manual to document the new extension and restore process.
 * Added PostgreSQL extension in CedarBackup2.extend.postgresql (closes: #32).
   - This code was contributed by user Antoine Beaupre ("The Anarcat").
   - I tweaked it slightly, added configuration tests, and updated the manual.
   - I have no PostgreSQL databases on which to test the functionality.
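The recursive flag added to FilesystemList.addDirContents() in 2.8.0, together with the recursionLevel limit added later in 2.21.0, amounts to a depth-limited directory walk. The sketch below shows the general idea; the function name and semantics are illustrative assumptions, not Cedar Backup's actual interface.

```python
import os

def listDirContents(path, maxDepth=None):
    """Return sorted relative paths of files under path.

    maxDepth is the number of directory levels to descend below path
    (None means unlimited, 0 means only files directly in path).
    This is a sketch, not the real FilesystemList.addDirContents().
    """
    results = []
    for root, dirs, files in os.walk(path):
        rel = os.path.relpath(root, path)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        if maxDepth is not None and depth >= maxDepth:
            dirs[:] = []  # prune the walk: do not descend any further
        for name in files:
            results.append(os.path.normpath(os.path.join(rel, name)))
    return sorted(results)
```

Pruning `dirs` in place is the standard way to stop `os.walk` from descending, which keeps the walk cheap even for deep trees.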
 * Made most unit tests run properly on Windows platform, just for fun.
 * Re-implement Pipe class (under executeCommand) for Python 2.4+
   - After Python 2.4, cross-platform subprocess.Popen class is available
   - Added some new regression tests for executeCommand to stress new Pipe
 * Switch to newer version of Docbook XSL stylesheet (1.68.1)
   - The old stylesheet isn't easily available any more (gone from sf.net)
   - Unfortunately, the PDF output changed somewhat with the new version
 * Add support for collecting individual files (closes: #30).
   - Create new config.CollectFile class for use by other classes
   - Update config.CollectConfig class to contain a list of collect files
   - Update config.Config class to parse and emit collect file data
   - Modified collect process in action.py to handle collect files
   - Updated user manual to discuss new configuration

Version 2.7.2   22 Dec 2005

 * Remove some bogus writer tests that depended on an arbitrary SCSI device.

Version 2.7.1   13 Dec 2005

 * Tweak the CREDITS file to fix a few typos.
 * Remove completed tasks in TODO file and reorganize it slightly.
 * Get rid of sys.exit() calls in util/test.py in favor of simple returns.
 * Fix implementation of BackupFileList.removeUnchanged(captureDigest=True).
   - Since version 2.7.0, digest only included backed-up (unchanged) files
   - This release fixes code so digest is captured for all files in the list
   - Fixed captureDigest test cases, which were testing for wrong results
 * Make some more updates to the user manual based on further proof-reading.
   - Rework description of "midnight boundary" warning slightly in basic.xml
   - Change "Which Linux Distribution?" to "Which Platform?" in config.xml
   - Fix a few typos and misspellings in basic.xml

Version 2.7.0   30 Oct 2005

 * Cleanup some maintainer-only (non-distributed) Makefile rules.
 * Make changes to standardize file headers with other Cedar Solutions code.
 * Add debug statements to filesystem code (huge increase in debug log size).
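The 2.8.0 entry above describes re-implementing the Pipe class on top of the cross-platform subprocess.Popen. A minimal sketch of that approach follows; the real util.executeCommand() supports more options (outputFile, doNotLog, etc., per the 2.1.0 entry), so this is only the core idea.

```python
import subprocess

def executeCommand(args):
    """Run a command, returning (exit status, list of output lines).

    Sketch of building the old popen2-style pipe on subprocess.Popen,
    which is portable across platforms (available since Python 2.4).
    stderr is merged into stdout, mirroring the Popen4-style behavior.
    """
    pipe = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    output = pipe.stdout.readlines()   # safe: reader drains before wait()
    status = pipe.wait()
    return status, output

status, output = executeCommand(["echo", "hello"])
print(status, output[0].strip())
```

Reading all output before calling `wait()` avoids the deadlock that can occur when a child fills its pipe buffer while the parent is blocked waiting for it to exit.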
 * Standardize some config variable names ("parentNode" instead of "parent").
 * Fix util/test.py to return proper (non-zero) return status upon failure.
 * No longer attempt to change ownership of files when not running as root.
 * Remove regression test for bug #25 (testAddFile_036) 'cause it's not portable.
 * Modify use of user/password in MySQL extension (suggested by Matthias Urlichs).
   - Make user and password values optional in Cedar Backup configuration
   - Add a few regression tests to make sure configuration changes work
   - Add warning when user or password value(s) are visible in process listing
   - Document use of /root/.my.cnf or ~/.my.cnf in source code and user manual
   - Rework discussion of command line, file permissions, etc. in user manual
 * Optimize incremental backup, and hopefully speed it up a bit (closes: #29).
   - Change BackupFileList.removeUnchanged() to accept a captureDigest flag
   - This avoids need to call both generateDigestMap() and removeUnchanged()
   - Note that interface to removeUnchanged was modified, but not broken
 * Add support for pre- and post-action command hooks (closes: #27).
   - Added and sections within
   - Updated user manual documentation for options configuration section
   - Create new config.PreActionHook and PostActionHook classes to hold hooks
   - Added new hooks list field on config.OptionsConfig class
   - Update ActionSet and ActionItem in cli to handle and execute hooks
 * Rework and abstract XML functionality, plus remove dependency on PyXML.
   - Refactor general XML utility code out of config.py into xmlutil.py
   - Create new isElement() function to eliminate need for Node references
   - Create new createInputDom(), createOutputDom() and serializeDom() functions
   - Use minidom XML parser rather than PyExpat.reader (much faster)
   - Hack together xmlutil.Serializer based on xml.dom.ext.PrettyPrint
   - Remove references to PyXML in manual's depends.xml and install.xml files
   - Add notes about PyXML code sourced from Fourthought, Inc. in CREDITS
   - Rework mysql and subversion unit tests in terms of new functions

Version 2.6.1   27 Sep 2005

 * Fix broken call to node.hasChildNodes (no parens) in config.py.
 * Make "pre-existing collect indicator" error more obvious (closes: #26).
 * Avoid failures for UTF-8 filenames on certain filesystems (closes: #25).
 * Fix FilesystemList to encode excludeList items, preventing UTF-8 failures.

Version 2.6.0   12 Sep 2005

 * Remove bogus check for remote collect directory on master (closes: #18).
 * Fix testEncodePath_009 test failure on UTF-8 filesystems (closes: #19).
 * Fixed several unit tests related to the CollectConfig class (all typos).
 * Fix filesystem and action code to properly handle path "/" (closes: #24).
 * Add extension configuration to cback.conf.sample, to clarify things.
 * Place starting and ending revision numbers into Subversion dump filenames.
 * Implement resolver mechanism to support paths to commands (closes: #22).
   - Added section within configuration
   - Create new config.CommandOverride class to hold overrides
   - Added new overrides field on config.OptionsConfig class
   - Create util.PathResolverSingleton class to encapsulate mappings
   - Create util.resolveCommand convenience function for code to call
   - Create and call new _setupPathResolver() function in cli code
   - Change all _CMD constants to _COMMAND, for consistency
 * Change Subversion extension to support "fsfs" repositories (closes: #20).
   - Accept "FSFS" repository in configuration section
   - Create new FSFSRepository class to represent an FSFS repository
   - Refactor internal code common to both BDB and FSFS repositories
   - Add and rework test cases to provide coverage of FSFSRepository
 * Port to Darwin (Mac OS X) and ensure that all regression tests pass.
   - Don't run testAddDirContents_072() for Darwin (tarball's invalid there)
   - Write new ISO mount testing methods in terms of Apple's "hdiutil" utility
   - Accept Darwin-style SCSI writer devices, i.e. "IOCompactDiscServices"
   - Tweak existing SCSI id pattern to allow spaces in a few other places
   - Add new regression tests for validateScsiId() utility function
   - Add code warnings and documentation in manual and in doc/osx
 * Update, clean up and extend Cedar Backup User Manual (closes: #21).
   - Work through document and copy-edit it now that it's matured
   - Add documentation for new options and subversion config items
   - Exorcise references to Linux which assumed it was "the" platform
   - Add platform-specific notes for non-Linux platforms (darwin, BSDs)
   - Clarify purpose of the 'collect' action on the master
   - Clarify how actions (i.e. 'store') are optional
   - Clarify that 'all' does not execute extensions
   - Add an appendix on restoring backups

Version 2.5.0   12 Jul 2005

 * Update docs to modify use of "secure" (suggested by Lars Wirzenius).
 * Removed "Not an Official Debian Package" section in software manual.
 * Reworked Debian install procedure in manual to reference official packages.
 * Fix manual's build process to create files with mode 664 rather than 755.
 * Deal better with date boundaries on the store operation (closes: #17).
   - Add value in configuration
   - Add warnMidnite field to the StoreConfig object
   - Add warning in store process for crossing midnite boundary
   - Change store --full to have more consistent behavior
   - Update manual to document changes related to this bug

Version 2.4.2   23 Apr 2005

 * Fix boundaries log message again, properly this time.
 * Fix a few other log messages that used "," rather than "%".

Version 2.4.1   22 Apr 2005

 * Fix minor typos in user manual and source code documentation.
 * Properly annotate code implemented based on Python 2.3 source.
 * Add info within CREDITS about Python 2.3 and Docbook XSL licenses.
 * Fix logging for boundaries values (can't print None[0], duh).

Version 2.4.0   02 Apr 2005

 * Re-license manual under "GPL with clarifications" to satisfy DFSG.
 * Rework our unmount solution again to try and fix observed problems.
   - Sometimes, unmount seems to "work" but leaves things mounted.
   - This might be because some file is not yet completely closed.
   - We try to work around this by making repeated unmount attempts.
   - This logic is now encapsulated in util.mount() and util.unmount().
   - This solution should also be more portable to non-Linux systems.

Version 2.3.1   23 Mar 2005

 * Attempt to deal more gracefully with corrupted media.
 * Unmount media using -l ("lazy unmount") in consistency check.
 * Be more verbose about media errors during consistency check.

Version 2.3.0   10 Mar 2005

 * Make 'extend' package public by listing it in CedarBackup2/__init__.py.
 * Reimplement digest generation to use incremental method (now ~3x faster).
 * Tweak manifest to be a little more selective about what's distributed.

Version 2.2.0   09 Mar 2005

 * Fix bug related to execution of commands with huge output.
 * Create custom class util.Pipe, inheriting from popen2.Popen4.
 * Re-implement util.executeCommand() in terms of util.Pipe.
 * Change ownership of sysinfo files to backup user/group after write.

Version 2.1.3   08 Mar 2005

 * In sysinfo extension, explicitly specify path to /sbin/fdisk command.
 * Modify behavior and logging when optional sysinfo commands are not found.
 * Add extra logging around boundaries and capacity calculations in writer.py.
 * In executeCommand, log command using output logger as well as debug level.
 * Docs now suggest --output in cron command line to aid problem diagnosis.
 * Fix bug in capacity calculation, this time for media with a single session.
 * Validate all capacity code against v1.0 code, making changes as needed.
 * Re-evaluate all capacity-related regression tests against v1.0 code.
 * Add new regression tests for capacity bugs which weren't already detected.

Version 2.1.2   07 Mar 2005

 * Fix a few extension error messages with incorrect (missing) arguments.
 * In sysinfo extension, do not log ls and dpkg output to the debug log.
 * Fix CdWriter, which reported negative capacity when disc was almost full.
 * Make displayBytes deal properly with negative values via math.fabs().
 * Change displayBytes to default to 2 digits after the decimal point.

Version 2.1.1   06 Mar 2005

 * Fix bug in setup.py (need to install extensions properly).

Version 2.1.0   06 Mar 2005

 * Fixed doc/cback.1 .TH line to give proper manpage section.
 * Updated README to more completely describe what Cedar Backup is.
 * Fix a few logging statements for the collect action, to be clearer.
 * Fix regression tests that failed in a Debian pbuilder environment.
 * Add simple main routine to cli.py, so executing it is the same as cback.
 * Added optional outputFile and doNotLog parameters to util.executeCommand().
 * Display byte quantities in sensible units (i.e. bytes, kB, MB) when logged.
 * Refactored private code into public in action.py and config.py.
 * Created MySQL extension in CedarBackup2.extend.mysql.
 * Created sysinfo extension in CedarBackup2.extend.sysinfo.
 * Created Subversion extension in CedarBackup2.extend.subversion.
 * Added regression tests as needed for new extension functionality.
 * Added Chapter 5, Official Extensions in the user manual.

Version 2.0.0   26 Feb 2005

 * Complete ground-up rewrite for 2.0.0 release.
 * See doc/release.txt for more details about changes.

Version 1.13   25 Jan 2005

 * Fix boundaries calculation when using kernel >= 2.6.8 (closes: #16).
 * Look for a matching boundaries pattern among all lines, not just the first.

Version 1.12   16 Jan 2005

 * Add support for ATAPI devices, just like ATA (closes: #15).
 * SCSI id can now be in the form '[ATA:|ATAPI:]scsibus,target,lun'.

Version 1.11   17 Oct 2004

 * Add experimental support for new Linux 2.6 ATA CD devices.
 * SCSI id can now be in the form '[ATA:]scsibus,target,lun'.
 * Internally, the SCSI id is now stored as a string, not a list.
 * Cleaned up 'cdrecord' calls in cdr.py to make them consistent.
 * Fixed a pile of warnings noticed by the latest pychecker.
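The 1.11/1.12 entries describe accepting SCSI ids of the form '[ATA:|ATAPI:]scsibus,target,lun'. A hedged sketch of validating that form follows; the real validateScsiId() accepts more variations (Darwin device names, extra whitespace, and looser "methods", per the 2.6.0 and 2.9.0 entries), so the pattern here is only illustrative.

```python
import re

# Matches an optional ATA:/ATAPI: prefix followed by a bus,target,lun
# triplet, e.g. "0,0,0" or "ATA:1,0,0".  Illustrative only.
SCSI_PATTERN = re.compile(r"^\s*(ATA:|ATAPI:)?(\d+),(\d+),(\d+)\s*$")

def isValidScsiId(scsiId):
    """Return True when scsiId looks like a valid SCSI id triplet."""
    return SCSI_PATTERN.match(scsiId) is not None

print(isValidScsiId("0,0,0"))        # True
print(isValidScsiId("ATAPI:1,0,0"))  # True
print(isValidScsiId("1,0"))          # False: not a full triplet
```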
Version 1.10   01 Dec 2003

 * Removed extraneous error parameter from cback's version() function.
 * Changed copyright statement and year; added COPYRIGHT in release.py.
 * Reworked all file headers to match new Cedar Solutions standard.
 * Removed __version__ and __date__ values with switch to Subversion.
 * Convert to tabs in Changelog to make the Vim syntax file happy.
 * Be more stringent in validating contents of SCSI triplet values.
 * Fixed bug when using modulo 1 (% 1) in a few places.
 * Fixed shell-interpolation bug discovered by Rick Low (security hole).
 * Replace all os.popen() calls with new execute_command() call for safety.

Version 1.9   09 Nov 2002

 * Packaging changes to allow Debian version to be "normal", not Debian-native.
 * Added CedarBackup/release.py to contain "upstream" release number.
 * Added -V,--version option to cback script.
 * Rewrote parts of Makefile to remove most Debian-specific rules.
 * Changed Makefile and setup.py to get version info from release.py.
 * The setup.py script now references /usr/bin/env python, not python2.2.
 * Debian-related changes will now reside exclusively in debian/changelog.

Version 1.8   14 Oct 2002

 * Fix bug with the way the default mode is displayed in the help screen.

Version 1.7   14 Oct 2002

 * Bug fix. Upgrade to Python 2.2.2b1 exposed a flaw in my version-check code.

Version 1.6   06 Oct 2002

 * Debian packaging cleanup (should have been a Debian-only release 1.5-2).

Version 1.5   19 Sep 2002

 * Changed cback script to more closely control ownership of logfile.

Version 1.4   10 Sep 2002

 * Various packaging cleanups.
 * Fixed code that reported negative capacity on a full disc.
 * Now blank disc ahead of time if it needs to be blanked.
 * Moved to Python2.2 for cleaner packaging (True, False, etc.)

Version 1.3   20 Aug 2002

 * Initial "public" release.
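Several entries above concern displaying byte quantities in sensible units (2.1.0), handling negative values via math.fabs() (2.1.2), and defaulting to 2 digits after the decimal point. A sketch of that kind of formatter follows; it is not the real displayBytes() implementation, just the idea.

```python
import math

def displayBytes(quantity, digits=2):
    """Format a byte quantity in sensible units (bytes, kB, MB, GB).

    Unit selection compares the absolute value, so negative quantities
    (which the 2.1.2 entry says must be handled via math.fabs()) are
    formatted in the same unit as their positive counterparts.
    """
    absolute = math.fabs(quantity)
    for unit in ["bytes", "kB", "MB", "GB"]:
        if absolute < 1024.0 or unit == "GB":
            value = math.copysign(absolute, quantity) if quantity else 0.0
            return "%.*f %s" % (digits, value, unit)
        absolute /= 1024.0

print(displayBytes(1024))      # 1.00 kB
print(displayBytes(-2621440))  # -2.50 MB
print(displayBytes(512))       # 512.00 bytes
```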
-----------------------------------------------------------------------------
vim: set ft=changelog noexpandtab:

CedarBackup2-2.22.0/README

# vim: set ft=text80:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Project  : Cedar Backup, release 2
# Revision : $Id: README 921 2008-05-06 02:12:52Z pronovic $
# Purpose  : README for package
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

Cedar Backup is a software package designed to manage system backups for a
pool of local and remote machines.  Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories.  It can also be easily extended to support other kinds of
data sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc,
with the expectation that the disc will be changed or overwritten at the
beginning of each week.  If your hardware is new enough, Cedar Backup can
write multisession discs, allowing you to add incremental data to a disc on
a daily basis.

Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python programming language.

This is release 2 of the Cedar Backup package.  It represents a complete
rewrite of the original package.  The new code is cleaner, more compact,
more focused and also more "pythonic" in its approach (although the coding
style has arguably been influenced by my experiences with Java over the
last few years).  There is also now an extensive unit test suite, something
the first release always lacked.
For more information, see the Cedar Backup web site:

   http://cedar-backup.sourceforge.net/

If you regularly use Cedar Backup, you might also want to join the
low-volume cedar-backup-users mailing list, which you can subscribe to via
the SourceForge site.  This list is used to announce new releases of Cedar
Backup, and you can use it to report bugs or to get help using Cedar Backup.

CedarBackup2-2.22.0/INSTALL

# vim: set ft=text80:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Project  : Cedar Backup, release 2
# Revision : $Id: INSTALL 998 2010-07-07 19:56:08Z pronovic $
# Purpose  : INSTALL instructions for package
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

This module is distributed in standard Python distutils form.  Use:

   python setup.py --help

for more information on how to install it.  You must have a Python
interpreter version 2.5 or better to use these modules.

In the simplest case, you will probably just use:

   python setup.py install

to install to your standard Python site-packages directory.  Note that on
UNIX systems, you will probably need to do this as root.

The documentation and unit tests provided with this distribution are not
installed by setup.py.  You may put them wherever you would like.

You may wish to run the unit tests before actually installing anything.
Run them like so:

   python util/test.py

If any unit test reports a failure on your system, please email me the
output from the unit test, so I can fix the problem.  Please make sure to
include the diagnostic information printed out at the beginning of the
test run.
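The interpreter requirement stated above (Python 2.5 or better) is the same check the cback script itself performs at startup. A minimal, self-contained sketch of such a guard (interpreter_ok is a hypothetical helper name; the (2, 5) floor mirrors the requirement in INSTALL):

```python
import sys

def interpreter_ok(minimum=(2, 5)):
    """Return True when the running interpreter meets a minimum version.

    sys.version_info compares tuple-wise, so e.g. (3, 11) >= (2, 5) is
    True and (2, 4) >= (2, 5) is False.
    """
    return tuple(sys.version_info[:2]) >= minimum

# A script can bail out early, mirroring cback's behavior of returning
# error code 1 when the interpreter is too old:
ok = interpreter_ok()
```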
CedarBackup2-2.22.0/CedarBackup2/cli.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: cli.py 1022 2011-10-11 23:27:49Z pronovic $
# Purpose  : Provides command-line interface implementation.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides command-line interface implementation for the cback script.

Summary
=======

   The functionality in this module encapsulates the command-line interface
   for the cback script.  The cback script itself is very short, basically
   just an invocation of one function implemented here.
   That, in turn, makes it simpler to validate the command line interface
   (for instance, it's easier to run pychecker against a module, and unit
   tests are easier, too).

   The objects and functions implemented in this module are probably not
   useful to any code external to Cedar Backup.  Anyone else implementing
   their own command-line interface would have to reimplement (or at least
   enhance) all of this anyway.

Backwards Compatibility
=======================

   The command line interface has changed between Cedar Backup 1.x and
   Cedar Backup 2.x.  Some new switches have been added, and the actions
   have become simple arguments rather than switches (which is a much more
   standard command line format).  Old 1.x command lines are generally no
   longer valid.

@var DEFAULT_CONFIG: The default configuration file.
@var DEFAULT_LOGFILE: The default log file path.
@var DEFAULT_OWNERSHIP: Default ownership for the logfile.
@var DEFAULT_MODE: Default file permissions mode on the logfile.
@var VALID_ACTIONS: List of valid actions.
@var COMBINE_ACTIONS: List of actions which can be combined with other actions.
@var NONCOMBINE_ACTIONS: List of actions which cannot be combined with other actions.

@sort: cli, Options, DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP,
       DEFAULT_MODE, VALID_ACTIONS, COMBINE_ACTIONS, NONCOMBINE_ACTIONS

@author: Kenneth J. Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import sys
import os
import logging
import getopt

# Cedar Backup modules
from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT
from CedarBackup2.customize import customizeOverrides
from CedarBackup2.util import DirectedGraph, PathResolverSingleton
from CedarBackup2.util import sortDict, splitCommandLine, executeCommand, getFunctionReference
from CedarBackup2.util import getUidGid, encodePath, Diagnostics
from CedarBackup2.config import Config
from CedarBackup2.peer import RemotePeer
from CedarBackup2.actions.collect import executeCollect
from CedarBackup2.actions.stage import executeStage
from CedarBackup2.actions.store import executeStore
from CedarBackup2.actions.purge import executePurge
from CedarBackup2.actions.rebuild import executeRebuild
from CedarBackup2.actions.validate import executeValidate
from CedarBackup2.actions.initialize import executeInitialize


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.cli")

DISK_LOG_FORMAT    = "%(asctime)s --> [%(levelname)-7s] %(message)s"
DISK_OUTPUT_FORMAT = "%(message)s"
SCREEN_LOG_FORMAT  = "%(message)s"
SCREEN_LOG_STREAM  = sys.stdout
DATE_FORMAT        = "%Y-%m-%dT%H:%M:%S %Z"

DEFAULT_CONFIG     = "/etc/cback.conf"
DEFAULT_LOGFILE    = "/var/log/cback.log"
DEFAULT_OWNERSHIP  = [ "root", "adm", ]
DEFAULT_MODE       = 0640

REBUILD_INDEX    = 0    # can't run with anything else, anyway
VALIDATE_INDEX   = 0    # can't run with anything else, anyway
INITIALIZE_INDEX = 0    # can't run with anything else, anyway
COLLECT_INDEX    = 100
STAGE_INDEX      = 200
STORE_INDEX      = 300
PURGE_INDEX      = 400

VALID_ACTIONS      = [ "collect", "stage", "store", "purge", "rebuild", "validate", "initialize", "all", ]
COMBINE_ACTIONS    = [ "collect", "stage", "store", "purge", ]
NONCOMBINE_ACTIONS = [ "rebuild", "validate", "initialize", "all", ]

SHORT_SWITCHES     = "hVbqc:fMNl:o:m:OdsD"
LONG_SWITCHES      = [ 'help', 'version', 'verbose', 'quiet',
                       'config=', 'full', 'managed', 'managed-only',
                       'logfile=', 'owner=', 'mode=',
                       'output', 'debug', 'stack', 'diagnostics', ]


#######################################################################
# Public functions
#######################################################################

#################
# cli() function
#################

def cli():
   """
   Implements the command-line interface for the C{cback} script.

   Essentially, this is the "main routine" for the cback script.  It does
   all of the argument processing for the script, and then sets about
   executing the indicated actions.

   As a general rule, only the actions indicated on the command line will
   be executed.  We will accept any of the built-in actions and any of the
   configured extended actions (which makes action list verification a
   two-step process).

   The C{'all'} action has a special meaning: it means that the built-in
   set of actions (collect, stage, store, purge) will all be executed, in
   that order.  Extended actions will be ignored as part of the C{'all'}
   action.

   Raised exceptions always result in an immediate return.  Otherwise, we
   generally return when all specified actions have been completed.
   Actions are ignored if the help, version or validate flags are set.

   A different error code is returned for each type of failure:

      - C{1}: The Python interpreter version is < 2.5
      - C{2}: Error processing command-line arguments
      - C{3}: Error configuring logging
      - C{4}: Error parsing indicated configuration file
      - C{5}: Backup was interrupted with a CTRL-C or similar
      - C{6}: Error executing specified backup actions

   @note: This function contains a good amount of logging at the INFO
   level, because this is the right place to document high-level flow
   of control (i.e.
   what the command-line options were, what config file was being used,
   etc.)

   @note: We assume that anything that I{must} be seen on the screen is
   logged at the ERROR level.  Errors that occur before logging can be
   configured are written to C{sys.stderr}.

   @return: Error code as described above.
   """
   try:
      if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5]:
         sys.stderr.write("Python version 2.5 or greater required.\n")
         return 1
   except:
      # sys.version_info isn't available before 2.0
      sys.stderr.write("Python version 2.5 or greater required.\n")
      return 1

   try:
      options = Options(argumentList=sys.argv[1:])
      logger.info("Specified command-line actions: %s" % options.actions)
   except Exception, e:
      _usage()
      sys.stderr.write(" *** Error: %s\n" % e)
      return 2

   if options.help:
      _usage()
      return 0
   if options.version:
      _version()
      return 0
   if options.diagnostics:
      _diagnostics()
      return 0

   try:
      logfile = setupLogging(options)
   except Exception, e:
      sys.stderr.write("Error setting up logging: %s\n" % e)
      return 3

   logger.info("Cedar Backup run started.")
   logger.info("Options were [%s]" % options)
   logger.info("Logfile is [%s]" % logfile)
   Diagnostics().logDiagnostics(method=logger.info)

   if options.config is None:
      logger.debug("Using default configuration file.")
      configPath = DEFAULT_CONFIG
   else:
      logger.debug("Using user-supplied configuration file.")
      configPath = options.config

   executeLocal = True
   executeManaged = False
   if options.managedOnly:
      executeLocal = False
      executeManaged = True
   if options.managed:
      executeManaged = True
   logger.debug("Execute local actions: %s" % executeLocal)
   logger.debug("Execute managed actions: %s" % executeManaged)

   try:
      logger.info("Configuration path is [%s]" % configPath)
      config = Config(xmlPath=configPath)
      customizeOverrides(config)
      setupPathResolver(config)
      actionSet = _ActionSet(options.actions, config.extensions, config.options,
                             config.peers, executeManaged, executeLocal)
   except Exception, e:
      logger.error("Error reading or handling configuration: %s" % e)
logger.info("Cedar Backup run completed with status 4.") return 4 if options.stacktrace: actionSet.executeActions(configPath, options, config) else: try: actionSet.executeActions(configPath, options, config) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup run completed with status 5.") return 5 except Exception, e: logger.error("Error executing backup: %s" % e) logger.info("Cedar Backup run completed with status 6.") return 6 logger.info("Cedar Backup run completed with status 0.") return 0 ######################################################################## # Action-related class definition ######################################################################## #################### # _ActionItem class #################### class _ActionItem(object): """ Class representing a single action to be executed. This class represents a single named action to be executed, and understands how to execute that action. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information. This class is also where pre-action and post-action hooks are executed. An action item is instantiated in terms of optional pre- and post-action hook objects (config.ActionHook), which are then executed at the appropriate time (if set). @note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type. @cvar SORT_ORDER: Defines a sort order to order properly between types. """ SORT_ORDER = 0 def __init__(self, index, name, preHook, postHook, function): """ Default constructor. It's OK to pass C{None} for C{index}, C{preHook} or C{postHook}, but not for C{name}. 
@param index: Index of the item (or C{None}). @param name: Name of the action that is being executed. @param preHook: Pre-action hook in terms of an C{ActionHook} object, or C{None}. @param postHook: Post-action hook in terms of an C{ActionHook} object, or C{None}. @param function: Reference to function associated with item. """ self.index = index self.name = name self.preHook = preHook self.postHook = postHook self.function = function def __cmp__(self, other): """ Definition of equals operator for this class. The only thing we compare is the item's index. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.index != other.index: if self.index < other.index: return -1 else: return 1 else: if self.SORT_ORDER != other.SORT_ORDER: if self.SORT_ORDER < other.SORT_ORDER: return -1 else: return 1 return 0 def executeAction(self, configPath, options, config): """ Executes the action associated with an item, including hooks. See class notes for more details on how the action is executed. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action. @param config: Parsed configuration to be passed to action. @raise Exception: If there is a problem executing the action. """ logger.debug("Executing [%s] action." % self.name) if self.preHook is not None: self._executeHook("pre-action", self.preHook) self._executeAction(configPath, options, config) if self.postHook is not None: self._executeHook("post-action", self.postHook) def _executeAction(self, configPath, options, config): """ Executes the action, specifically the function associated with the action. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action. @param config: Parsed configuration to be passed to action. 
""" name = "%s.%s" % (self.function.__module__, self.function.__name__) logger.debug("Calling action function [%s], execution index [%d]" % (name, self.index)) self.function(configPath, options, config) def _executeHook(self, type, hook): # pylint: disable=W0622,R0201 """ Executes a hook command via L{util.executeCommand()}. @param type: String describing the type of hook, for logging. @param hook: Hook, in terms of a C{ActionHook} object. """ logger.debug("Executing %s hook for action [%s]." % (type, hook.action)) fields = splitCommandLine(hook.command) executeCommand(command=fields[0:1], args=fields[1:]) ########################### # _ManagedActionItem class ########################### class _ManagedActionItem(object): """ Class representing a single action to be executed on a managed peer. This class represents a single named action to be executed, and understands how to execute that action. Actions to be executed on a managed peer rely on peer configuration and on the full-backup flag. All other configuration takes place on the remote peer itself. @note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type. @cvar SORT_ORDER: Defines a sort order to order properly between types. """ SORT_ORDER = 1 def __init__(self, index, name, remotePeers): """ Default constructor. @param index: Index of the item (or C{None}). @param name: Name of the action that is being executed. @param remotePeers: List of remote peers on which to execute the action. """ self.index = index self.name = name self.remotePeers = remotePeers def __cmp__(self, other): """ Definition of equals operator for this class. The only thing we compare is the item's index. @param other: Other object to compare to. 
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.index != other.index:
         if self.index < other.index:
            return -1
         else:
            return 1
      else:
         if self.SORT_ORDER != other.SORT_ORDER:
            if self.SORT_ORDER < other.SORT_ORDER:
               return -1
            else:
               return 1
      return 0

   def executeAction(self, configPath, options, config):
      """
      Executes the managed action associated with an item.

      @note: Only options.full is actually used.  The rest of the arguments
      exist to satisfy the ActionItem interface.

      @note: Errors here result in a message logged to ERROR, but no thrown
      exception.  The analogy is the stage action, where a problem with one
      host should not kill the entire backup.  Since we're logging an error,
      the administrator will get an email.

      @param configPath: Path to configuration file on disk.
      @param options: Command-line options to be passed to action.
      @param config: Parsed configuration to be passed to action.

      @raise Exception: If there is a problem executing the action.
      """
      for peer in self.remotePeers:
         logger.debug("Executing managed action [%s] on peer [%s]." % (self.name, peer.name))
         try:
            peer.executeManagedAction(self.name, options.full)
         except IOError, e:
            logger.error(e)  # log the message and go on, so we don't kill the backup


###################
# _ActionSet class
###################

class _ActionSet(object):
   """
   Class representing a set of local actions to be executed.

   This class does four different things.

   First, it ensures that the actions specified on the command-line are
   sensible.  The command-line can only list either built-in actions or
   extended actions specified in configuration.  Also, certain actions (in
   L{NONCOMBINE_ACTIONS}) cannot be combined with other actions.

   Second, the class enforces an execution order on the specified actions.
   Any time actions are combined on the command line (either built-in
   actions or extended actions), we must make sure they get executed in a
   sensible order.
Third, the class ensures that any pre-action or post-action hooks are scheduled and executed appropriately. Hooks are configured by building a dictionary mapping between hook action name and command. Pre-action hooks are executed immediately before their associated action, and post-action hooks are executed immediately after their associated action. Finally, the class properly interleaves local and managed actions so that the same action gets executed first locally and then on managed peers. @sort: __init__, executeActions """ def __init__(self, actions, extensions, options, peers, managed, local): """ Constructor for the C{_ActionSet} class. This is kind of ugly, because the constructor has to set up a lot of data before being able to do anything useful. The following data structures are initialized based on the input: - C{extensionNames}: List of extensions available in configuration - C{preHookMap}: Mapping from action name to pre C{ActionHook} - C{preHookMap}: Mapping from action name to post C{ActionHook} - C{functionMap}: Mapping from action name to Python function - C{indexMap}: Mapping from action name to execution index - C{peerMap}: Mapping from action name to set of C{RemotePeer} - C{actionMap}: Mapping from action name to C{_ActionItem} Once these data structures are set up, the command line is validated to make sure only valid actions have been requested, and in a sensible combination. Then, all of the data is used to build C{self.actionSet}, the set action items to be executed by C{executeActions()}. This list might contain either C{_ActionItem} or C{_ManagedActionItem}. @param actions: Names of actions specified on the command-line. @param extensions: Extended action configuration (i.e. config.extensions) @param options: Options configuration (i.e. config.options) @param peers: Peers configuration (i.e. 
config.peers) @param managed: Whether to include managed actions in the set @param local: Whether to include local actions in the set @raise ValueError: If one of the specified actions is invalid. """ extensionNames = _ActionSet._deriveExtensionNames(extensions) (preHookMap, postHookMap) = _ActionSet._buildHookMaps(options.hooks) functionMap = _ActionSet._buildFunctionMap(extensions) indexMap = _ActionSet._buildIndexMap(extensions) peerMap = _ActionSet._buildPeerMap(options, peers) actionMap = _ActionSet._buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap) _ActionSet._validateActions(actions, extensionNames) self.actionSet = _ActionSet._buildActionSet(actions, actionMap) @staticmethod def _deriveExtensionNames(extensions): """ Builds a list of extended actions that are available in configuration. @param extensions: Extended action configuration (i.e. config.extensions) @return: List of extended action names. """ extensionNames = [] if extensions is not None and extensions.actions is not None: for action in extensions.actions: extensionNames.append(action.name) return extensionNames @staticmethod def _buildHookMaps(hooks): """ Build two mappings from action name to configured C{ActionHook}. @param hooks: List of pre- and post-action hooks (i.e. config.options.hooks) @return: Tuple of (pre hook dictionary, post hook dictionary). """ preHookMap = {} postHookMap = {} if hooks is not None: for hook in hooks: if hook.before: preHookMap[hook.action] = hook elif hook.after: postHookMap[hook.action] = hook return (preHookMap, postHookMap) @staticmethod def _buildFunctionMap(extensions): """ Builds a mapping from named action to action function. @param extensions: Extended action configuration (i.e. config.extensions) @return: Dictionary mapping action to function. 
""" functionMap = {} functionMap['rebuild'] = executeRebuild functionMap['validate'] = executeValidate functionMap['initialize'] = executeInitialize functionMap['collect'] = executeCollect functionMap['stage'] = executeStage functionMap['store'] = executeStore functionMap['purge'] = executePurge if extensions is not None and extensions.actions is not None: for action in extensions.actions: functionMap[action.name] = getFunctionReference(action.module, action.function) return functionMap @staticmethod def _buildIndexMap(extensions): """ Builds a mapping from action name to proper execution index. If extensions configuration is C{None}, or there are no configured extended actions, the ordering dictionary will only include the built-in actions and their standard indices. Otherwise, if the extensions order mode is C{None} or C{"index"}, actions will scheduled by explicit index; and if the extensions order mode is C{"dependency"}, actions will be scheduled using a dependency graph. @param extensions: Extended action configuration (i.e. config.extensions) @return: Dictionary mapping action name to integer execution index. 
""" indexMap = {} if extensions is None or extensions.actions is None or extensions.actions == []: logger.info("Action ordering will use 'index' order mode.") indexMap['rebuild'] = REBUILD_INDEX indexMap['validate'] = VALIDATE_INDEX indexMap['initialize'] = INITIALIZE_INDEX indexMap['collect'] = COLLECT_INDEX indexMap['stage'] = STAGE_INDEX indexMap['store'] = STORE_INDEX indexMap['purge'] = PURGE_INDEX logger.debug("Completed filling in action indices for built-in actions.") logger.info("Action order will be: %s" % sortDict(indexMap)) else: if extensions.orderMode is None or extensions.orderMode == "index": logger.info("Action ordering will use 'index' order mode.") indexMap['rebuild'] = REBUILD_INDEX indexMap['validate'] = VALIDATE_INDEX indexMap['initialize'] = INITIALIZE_INDEX indexMap['collect'] = COLLECT_INDEX indexMap['stage'] = STAGE_INDEX indexMap['store'] = STORE_INDEX indexMap['purge'] = PURGE_INDEX logger.debug("Completed filling in action indices for built-in actions.") for action in extensions.actions: indexMap[action.name] = action.index logger.debug("Completed filling in action indices for extended actions.") logger.info("Action order will be: %s" % sortDict(indexMap)) else: logger.info("Action ordering will use 'dependency' order mode.") graph = DirectedGraph("dependencies") graph.createVertex("rebuild") graph.createVertex("validate") graph.createVertex("initialize") graph.createVertex("collect") graph.createVertex("stage") graph.createVertex("store") graph.createVertex("purge") for action in extensions.actions: graph.createVertex(action.name) graph.createEdge("collect", "stage") # Collect must run before stage, store or purge graph.createEdge("collect", "store") graph.createEdge("collect", "purge") graph.createEdge("stage", "store") # Stage must run before store or purge graph.createEdge("stage", "purge") graph.createEdge("store", "purge") # Store must run before purge for action in extensions.actions: if action.dependencies.beforeList is not 
None: for vertex in action.dependencies.beforeList: try: graph.createEdge(action.name, vertex) # actions that this action must be run before except ValueError: logger.error("Dependency [%s] on extension [%s] is unknown." % (vertex, action.name)) raise ValueError("Unable to determine proper action order due to invalid dependency.") if action.dependencies.afterList is not None: for vertex in action.dependencies.afterList: try: graph.createEdge(vertex, action.name) # actions that this action must be run after except ValueError: logger.error("Dependency [%s] on extension [%s] is unknown." % (vertex, action.name)) raise ValueError("Unable to determine proper action order due to invalid dependency.") try: ordering = graph.topologicalSort() indexMap = dict([(ordering[i], i+1) for i in range(0, len(ordering))]) logger.info("Action order will be: %s" % ordering) except ValueError: logger.error("Unable to determine proper action order due to dependency recursion.") logger.error("Extensions configuration is invalid (check for loops).") raise ValueError("Unable to determine proper action order due to dependency recursion.") return indexMap @staticmethod def _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap): """ Builds a mapping from action name to list of action items. We build either C{_ActionItem} or C{_ManagedActionItem} objects here. In most cases, the mapping from action name to C{_ActionItem} is 1:1. The exception is the "all" action, which is a special case. However, a list is returned in all cases, just for consistency later. Each C{_ActionItem} will be created with a proper function reference and index value for execution ordering. The mapping from action name to C{_ManagedActionItem} is always 1:1. Each managed action item contains a list of peers which the action should be executed. 
@param managed: Whether to include managed actions in the set @param local: Whether to include local actions in the set @param extensionNames: List of valid extended action names @param functionMap: Dictionary mapping action name to Python function @param indexMap: Dictionary mapping action name to integer execution index @param preHookMap: Dictionary mapping action name to pre hooks (if any) for the action @param postHookMap: Dictionary mapping action name to post hooks (if any) for the action @param peerMap: Dictionary mapping action name to list of remote peers on which to execute the action @return: Dictionary mapping action name to list of C{_ActionItem} objects. """ actionMap = {} for name in extensionNames + VALID_ACTIONS: if name != 'all': # do this one later function = functionMap[name] index = indexMap[name] actionMap[name] = [] if local: (preHook, postHook) = _ActionSet._deriveHooks(name, preHookMap, postHookMap) actionMap[name].append(_ActionItem(index, name, preHook, postHook, function)) if managed: if name in peerMap: actionMap[name].append(_ManagedActionItem(index, name, peerMap[name])) actionMap['all'] = actionMap['collect'] + actionMap['stage'] + actionMap['store'] + actionMap['purge'] return actionMap @staticmethod def _buildPeerMap(options, peers): """ Build a mapping from action name to list of remote peers. There will be one entry in the mapping for each managed action. If there are no managed peers, the mapping will be empty. Only managed actions will be listed in the mapping. @param options: Option configuration (i.e. config.options) @param peers: Peers configuration (i.e. 
      config.peers)
      """
      peerMap = {}
      if peers is not None:
         if peers.remotePeers is not None:
            for peer in peers.remotePeers:
               if peer.managed:
                  remoteUser = _ActionSet._getRemoteUser(options, peer)
                  rshCommand = _ActionSet._getRshCommand(options, peer)
                  cbackCommand = _ActionSet._getCbackCommand(options, peer)
                  managedActions = _ActionSet._getManagedActions(options, peer)
                  remotePeer = RemotePeer(peer.name, None, options.workingDir,
                                          remoteUser, None, options.backupUser,
                                          rshCommand, cbackCommand)
                  if managedActions is not None:
                     for managedAction in managedActions:
                        if managedAction in peerMap:
                           if remotePeer not in peerMap[managedAction]:
                              peerMap[managedAction].append(remotePeer)
                        else:
                           peerMap[managedAction] = [ remotePeer, ]
      return peerMap

   @staticmethod
   def _deriveHooks(action, preHookDict, postHookDict):
      """
      Derive pre- and post-action hooks, if any, associated with named action.

      @param action: Name of action to look up
      @param preHookDict: Dictionary mapping pre-action hooks to action name
      @param postHookDict: Dictionary mapping post-action hooks to action name

      @return: Tuple (preHook, postHook) per mapping, with None values if there is no hook.
      """
      preHook = None
      postHook = None
      if preHookDict.has_key(action):
         preHook = preHookDict[action]
      if postHookDict.has_key(action):
         postHook = postHookDict[action]
      return (preHook, postHook)

   @staticmethod
   def _validateActions(actions, extensionNames):
      """
      Validate that the set of specified actions is sensible.

      Any specified action must either be a built-in action or must be
      among the extended actions defined in configuration.  The actions
      from within L{NONCOMBINE_ACTIONS} may not be combined with other
      actions.

      @param actions: Names of actions specified on the command-line.
      @param extensionNames: Names of extensions specified in configuration.

      @raise ValueError: If one or more configured actions are not valid.
""" if actions is None or actions == []: raise ValueError("No actions specified.") for action in actions: if action not in VALID_ACTIONS and action not in extensionNames: raise ValueError("Action [%s] is not a valid action or extended action." % action) for action in NONCOMBINE_ACTIONS: if action in actions and actions != [ action, ]: raise ValueError("Action [%s] may not be combined with other actions." % action) @staticmethod def _buildActionSet(actions, actionMap): """ Build set of actions to be executed. The set of actions is built in the proper order, so C{executeActions} can spin through the set without thinking about it. Since we've already validated that the set of actions is sensible, we don't take any precautions here to make sure things are combined properly. If the action is listed, it will be "scheduled" for execution. @param actions: Names of actions specified on the command-line. @param actionMap: Dictionary mapping action name to C{_ActionItem} object. @return: Set of action items in proper order. """ actionSet = [] for action in actions: actionSet.extend(actionMap[action]) actionSet.sort() # sort the actions in order by index return actionSet def executeActions(self, configPath, options, config): """ Executes all actions and extended actions, in the proper order. Each action (whether built-in or extension) is executed in an identical manner. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action functions. @param config: Parsed configuration to be passed to action functions. @raise Exception: If there is a problem executing the actions. 
""" logger.debug("Executing local actions.") for actionItem in self.actionSet: actionItem.executeAction(configPath, options, config) @staticmethod def _getRemoteUser(options, remotePeer): """ Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: Name of remote user associated with remote peer. """ if remotePeer.remoteUser is None: return options.backupUser return remotePeer.remoteUser @staticmethod def _getRshCommand(options, remotePeer): """ Gets the RSH command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: RSH command associated with remote peer. """ if remotePeer.rshCommand is None: return options.rshCommand return remotePeer.rshCommand @staticmethod def _getCbackCommand(options, remotePeer): """ Gets the cback command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: cback command associated with remote peer. """ if remotePeer.cbackCommand is None: return options.cbackCommand return remotePeer.cbackCommand @staticmethod def _getManagedActions(options, remotePeer): """ Gets the managed actions list associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: Set of managed actions associated with remote peer. 
""" if remotePeer.managedActions is None: return options.managedActions return remotePeer.managedActions ####################################################################### # Utility functions ####################################################################### #################### # _usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback [switches] action(s)\n") fd.write("\n") fd.write(" The following switches are accepted:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -q, --quiet Run quietly (display no output to the screen)\n") fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) fd.write(" -f, --full Perform a full backup, regardless of configuration\n") fd.write(" -M, --managed Include managed clients when executing actions\n") fd.write(" -N, --managed-only Include ONLY managed clients when executing actions\n") fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. cdrecord) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width! 
fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n") fd.write("\n") fd.write(" The following actions may be specified:\n") fd.write("\n") fd.write(" all Take all normal actions (collect, stage, store, purge)\n") fd.write(" collect Take the collect action\n") fd.write(" stage Take the stage action\n") fd.write(" store Take the store action\n") fd.write(" purge Take the purge action\n") fd.write(" rebuild Rebuild \"this week's\" disc if possible\n") fd.write(" validate Validate configuration only\n") fd.write(" initialize Initialize media for use with Cedar Backup\n") fd.write("\n") fd.write(" You may also specify extended actions that have been defined in\n") fd.write(" configuration.\n") fd.write("\n") fd.write(" You must specify at least one action to take. More than one of\n") fd.write(" the \"collect\", \"stage\", \"store\" or \"purge\" actions and/or\n") fd.write(" extended actions may be specified in any arbitrary order; they\n") fd.write(" will be executed in a sensible order. The \"all\", \"rebuild\",\n") fd.write(" \"validate\", and \"initialize\" actions may not be combined with\n") fd.write(" other actions.\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. 
See the\n") fd.write(" GNU General Public License version 2 for copying conditions.\n") fd.write("\n") fd.write(" Use the --help option for usage information.\n") fd.write("\n") ########################## # _diagnostics() function ########################## def _diagnostics(fd=sys.stdout): """ Prints runtime diagnostics information. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write("Diagnostics:\n") fd.write("\n") Diagnostics().printDiagnostics(fd=fd, prefix=" ") fd.write("\n") ########################## # setupLogging() function ########################## def setupLogging(options): """ Set up logging based on command-line options. There are two kinds of logging: flow logging and output logging. Output logging contains information about system commands executed by Cedar Backup, for instance the calls to C{mkisofs} or C{mount}, etc. Flow logging contains error and informational messages used to understand program flow. Flow log messages and output log messages are written to two different loggers target (C{CedarBackup2.log} and C{CedarBackup2.output}). Flow log messages are written at the ERROR, INFO and DEBUG log levels, while output log messages are generally only written at the INFO log level. By default, output logging is disabled. When the C{options.output} or C{options.debug} flags are set, output logging will be written to the configured logfile. Output logging is never written to the screen. By default, flow logging is enabled at the ERROR level to the screen and at the INFO level to the configured logfile. If the C{options.quiet} flag is set, flow logging is enabled at the INFO level to the configured logfile only (i.e. no output will be sent to the screen). If the C{options.verbose} flag is set, flow logging is enabled at the INFO level to both the screen and the configured logfile. 
If the C{options.debug} flag is set, flow logging is enabled at the DEBUG level to both the screen and the configured logfile. @param options: Command-line options. @type options: L{Options} object @return: Path to logfile on disk. """ logfile = _setupLogfile(options) _setupFlowLogging(logfile, options) _setupOutputLogging(logfile, options) return logfile def _setupLogfile(options): """ Sets up and creates logfile as needed. If the logfile already exists on disk, it will be left as-is, under the assumption that it was created with appropriate ownership and permissions. If the logfile does not exist on disk, it will be created as an empty file. Ownership and permissions will remain at their defaults unless user/group and/or mode are set in the options. We ignore errors setting the indicated user and group. @note: This function is vulnerable to a race condition. If the log file does not exist when the function is run, it will attempt to create the file as safely as possible (using C{O_CREAT}). If two processes attempt to create the file at the same time, then one of them will fail. In practice, this shouldn't really be a problem, but it might happen occasionally if two instances of cback run concurrently or if cback collides with logrotate or something. @param options: Command-line options. @return: Path to logfile on disk. """ if options.logfile is None: logfile = DEFAULT_LOGFILE else: logfile = options.logfile if not os.path.exists(logfile): if options.mode is None: os.fdopen(os.open(logfile, os.O_RDWR|os.O_CREAT|os.O_APPEND, DEFAULT_MODE), "a+").write("") else: os.fdopen(os.open(logfile, os.O_RDWR|os.O_CREAT|os.O_APPEND, options.mode), "a+").write("") try: if options.owner is None or len(options.owner) < 2: (uid, gid) = getUidGid(DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]) else: (uid, gid) = getUidGid(options.owner[0], options.owner[1]) os.chown(logfile, uid, gid) except Exception: pass # per the docstring, errors setting ownership are ignored return logfile def _setupFlowLogging(logfile, options): """ Sets up flow logging.
@param logfile: Path to logfile on disk. @param options: Command-line options. """ flowLogger = logging.getLogger("CedarBackup2.log") flowLogger.setLevel(logging.DEBUG) # let the logger see all messages _setupDiskFlowLogging(flowLogger, logfile, options) _setupScreenFlowLogging(flowLogger, options) def _setupOutputLogging(logfile, options): """ Sets up command output logging. @param logfile: Path to logfile on disk. @param options: Command-line options. """ outputLogger = logging.getLogger("CedarBackup2.output") outputLogger.setLevel(logging.DEBUG) # let the logger see all messages _setupDiskOutputLogging(outputLogger, logfile, options) def _setupDiskFlowLogging(flowLogger, logfile, options): """ Sets up on-disk flow logging. @param flowLogger: Python flow logger object. @param logfile: Path to logfile on disk. @param options: Command-line options. """ formatter = logging.Formatter(fmt=DISK_LOG_FORMAT, datefmt=DATE_FORMAT) handler = logging.FileHandler(logfile, mode="a") handler.setFormatter(formatter) if options.debug: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.INFO) flowLogger.addHandler(handler) def _setupScreenFlowLogging(flowLogger, options): """ Sets up on-screen flow logging. @param flowLogger: Python flow logger object. @param options: Command-line options. """ formatter = logging.Formatter(fmt=SCREEN_LOG_FORMAT) handler = logging.StreamHandler(SCREEN_LOG_STREAM) handler.setFormatter(formatter) if options.quiet: handler.setLevel(logging.CRITICAL) # effectively turn it off elif options.verbose: if options.debug: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.INFO) else: handler.setLevel(logging.ERROR) flowLogger.addHandler(handler) def _setupDiskOutputLogging(outputLogger, logfile, options): """ Sets up on-disk command output logging. @param outputLogger: Python command output logger object. @param logfile: Path to logfile on disk. @param options: Command-line options. 
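The logger/handler split used throughout these setup functions can be sketched standalone (logger name suffix and format string here are placeholders, not the module's real constants):

```python
import logging

# Sketch of the pattern above: the logger itself passes everything
# (DEBUG), and each attached handler filters to its own level.
logger = logging.getLogger("CedarBackup2.output.example")
logger.setLevel(logging.DEBUG)          # let the logger see all messages
handler = logging.StreamHandler()
handler.setLevel(logging.CRITICAL)      # effectively turn this handler off
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
```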
""" formatter = logging.Formatter(fmt=DISK_OUTPUT_FORMAT, datefmt=DATE_FORMAT) handler = logging.FileHandler(logfile, mode="a") handler.setFormatter(formatter) if options.debug or options.output: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.CRITICAL) # effectively turn it off outputLogger.addHandler(handler) ############################### # setupPathResolver() function ############################### def setupPathResolver(config): """ Set up the path resolver singleton based on configuration. Cedar Backup's path resolver is implemented in terms of a singleton, the L{PathResolverSingleton} class. This function takes options configuration, converts it into the dictionary form needed by the singleton, and then initializes the singleton. After that, any function that needs to resolve the path of a command can use the singleton. @param config: Configuration @type config: L{Config} object """ mapping = {} if config.options.overrides is not None: for override in config.options.overrides: mapping[override.command] = override.absolutePath singleton = PathResolverSingleton() singleton.fill(mapping) ######################################################################### # Options class definition ######################################################################## class Options(object): ###################### # Class documentation ###################### """ Class representing command-line options for the cback script. The C{Options} class is a Python object representation of the command-line options of the cback script. The object representation is two-way: a command line string or a list of command line arguments can be used to create an C{Options} object, and then changes to the object can be propogated back to a list of command-line arguments or to a command-line string. An C{Options} object can even be created from scratch programmatically (if you have a need for that). There are two main levels of validation in the C{Options} class. 
The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's C{property} functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a C{ValueError} exception when making assignments to fields if you are programmatically filling an object. The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc. All of these post-completion validations are encapsulated in the L{Options.validate} method. This method can be called at any time by a client, and will always be called immediately after creating a C{Options} object from a command line and before exporting a C{Options} object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__ """ ############## # Constructor ############## def __init__(self, argumentList=None, argumentString=None, validate=True): """ Initializes an options object. If you initialize the object without passing either C{argumentList} or C{argumentString}, the object will be empty and will be invalid until it is filled in properly. No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. The argument list is assumed to be a list of arguments, not including the name of the command, something like C{sys.argv[1:]}. 
If you pass C{sys.argv} instead, things are not going to work. The argument string will be parsed into an argument list by the L{util.splitCommandLine} function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to C{sys.argv[1:]}, just like C{argumentList}. Unless the C{validate} argument is C{False}, the L{Options.validate} method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in command line, so an exception might still be raised. @note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback script. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid command line arguments. @param argumentList: Command line for a program. @type argumentList: List of arguments, i.e. C{sys.argv} @param argumentString: Command line for a program. @type argumentString: String, i.e. "cback --verbose stage store" @param validate: Validate the command line after parsing it. @type validate: Boolean true/false. @raise getopt.GetoptError: If the command-line arguments could not be parsed. @raise ValueError: If the command-line arguments are invalid. 
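The constructor's input handling can be sketched standalone; C{shlex.split} below is a stand-in for L{util.splitCommandLine}, which has its own limitations:

```python
import shlex

# Hypothetical mirror of the constructor's argument handling: exactly one
# of the two sources may be supplied, and a string is split into a list.
def resolve_arguments(argument_list=None, argument_string=None):
    if argument_list is not None and argument_string is not None:
        raise ValueError("Use either argumentList or argumentString, but not both.")
    if argument_string is not None:
        argument_list = shlex.split(argument_string)
    return argument_list
```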
""" self._help = False self._version = False self._verbose = False self._quiet = False self._config = None self._full = False self._managed = False self._managedOnly = False self._logfile = None self._owner = None self._mode = None self._output = False self._debug = False self._stacktrace = False self._diagnostics = False self._actions = None self.actions = [] # initialize to an empty list; remainder are OK if argumentList is not None and argumentString is not None: raise ValueError("Use either argumentList or argumentString, but not both.") if argumentString is not None: argumentList = splitCommandLine(argumentString) if argumentList is not None: self._parseArgumentList(argumentList) if validate: self.validate() ######################### # String representations ######################### def __repr__(self): """ Official string representation for class instance. """ return self.buildArgumentString(validate=False) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() ############################# # Standard comparison method ############################# def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.help != other.help: if self.help < other.help: return -1 else: return 1 if self.version != other.version: if self.version < other.version: return -1 else: return 1 if self.verbose != other.verbose: if self.verbose < other.verbose: return -1 else: return 1 if self.quiet != other.quiet: if self.quiet < other.quiet: return -1 else: return 1 if self.config != other.config: if self.config < other.config: return -1 else: return 1 if self.full != other.full: if self.full < other.full: return -1 else: return 1 if self.managed != other.managed: if self.managed < other.managed: return -1 else: return 1 if self.managedOnly != other.managedOnly: if self.managedOnly < other.managedOnly: return -1 else: return 1 if self.logfile != other.logfile: if self.logfile < other.logfile: return -1 else: return 1 if self.owner != other.owner: if self.owner < other.owner: return -1 else: return 1 if self.mode != other.mode: if self.mode < other.mode: return -1 else: return 1 if self.output != other.output: if self.output < other.output: return -1 else: return 1 if self.debug != other.debug: if self.debug < other.debug: return -1 else: return 1 if self.stacktrace != other.stacktrace: if self.stacktrace < other.stacktrace: return -1 else: return 1 if self.diagnostics != other.diagnostics: if self.diagnostics < other.diagnostics: return -1 else: return 1 if self.actions != other.actions: if self.actions < other.actions: return -1 else: return 1 return 0 ############# # Properties ############# def _setHelp(self, value): """ Property target used to set the help flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._help = True else: self._help = False def _getHelp(self): """ Property target used to get the help flag. """ return self._help def _setVersion(self, value): """ Property target used to set the version flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._version = True else: self._version = False def _getVersion(self): """ Property target used to get the version flag. """ return self._version def _setVerbose(self, value): """ Property target used to set the verbose flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._verbose = True else: self._verbose = False def _getVerbose(self): """ Property target used to get the verbose flag. """ return self._verbose def _setQuiet(self, value): """ Property target used to set the quiet flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._quiet = True else: self._quiet = False def _getQuiet(self): """ Property target used to get the quiet flag. """ return self._quiet def _setConfig(self, value): """ Property target used to set the config parameter. """ if value is not None: if len(value) < 1: raise ValueError("The config parameter must be a non-empty string.") self._config = value def _getConfig(self): """ Property target used to get the config parameter. """ return self._config def _setFull(self, value): """ Property target used to set the full flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._full = True else: self._full = False def _getFull(self): """ Property target used to get the full flag. """ return self._full def _setManaged(self, value): """ Property target used to set the managed flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._managed = True else: self._managed = False def _getManaged(self): """ Property target used to get the managed flag. """ return self._managed def _setManagedOnly(self, value): """ Property target used to set the managedOnly flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._managedOnly = True else: self._managedOnly = False def _getManagedOnly(self): """ Property target used to get the managedOnly flag. 
""" return self._managedOnly def _setLogfile(self, value): """ Property target used to set the logfile parameter. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The logfile parameter must be a non-empty string.") self._logfile = encodePath(value) def _getLogfile(self): """ Property target used to get the logfile parameter. """ return self._logfile def _setOwner(self, value): """ Property target used to set the owner parameter. If not C{None}, the owner must be a C{(user,group)} tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple. @raise ValueError: If the value is not valid. """ if value is None: self._owner = None else: if isinstance(value, str): raise ValueError("Must specify user and group tuple for owner parameter.") if len(value) != 2: raise ValueError("Must specify user and group tuple for owner parameter.") if len(value[0]) < 1 or len(value[1]) < 1: raise ValueError("User and group tuple values must be non-empty strings.") self._owner = (value[0], value[1]) def _getOwner(self): """ Property target used to get the owner parameter. The parameter is a tuple of C{(user, group)}. """ return self._owner def _setMode(self, value): """ Property target used to set the mode parameter. """ if value is None: self._mode = None else: try: if isinstance(value, str): value = int(value, 8) else: value = int(value) except TypeError: raise ValueError("Mode must be an octal integer >= 0, i.e. 644.") if value < 0: raise ValueError("Mode must be an octal integer >= 0. i.e. 644.") self._mode = value def _getMode(self): """ Property target used to get the mode parameter. """ return self._mode def _setOutput(self, value): """ Property target used to set the output flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._output = True else: self._output = False def _getOutput(self): """ Property target used to get the output flag. """ return self._output def _setDebug(self, value): """ Property target used to set the debug flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._debug = True else: self._debug = False def _getDebug(self): """ Property target used to get the debug flag. """ return self._debug def _setStacktrace(self, value): """ Property target used to set the stacktrace flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._stacktrace = True else: self._stacktrace = False def _getStacktrace(self): """ Property target used to get the stacktrace flag. """ return self._stacktrace def _setDiagnostics(self, value): """ Property target used to set the diagnostics flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._diagnostics = True else: self._diagnostics = False def _getDiagnostics(self): """ Property target used to get the diagnostics flag. """ return self._diagnostics def _setActions(self, value): """ Property target used to set the actions list. We don't restrict the contents of actions. They're validated somewhere else. @raise ValueError: If the value is not valid. """ if value is None: self._actions = None else: try: saved = self._actions self._actions = [] self._actions.extend(value) except Exception, e: self._actions = saved raise e def _getActions(self): """ Property target used to get the actions list. 
""" return self._actions help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") config = property(_getConfig, _setConfig, None, "Command-line configuration file (C{-c,--config}) parameter.") full = property(_getFull, _setFull, None, "Command-line full-backup (C{-f,--full}) flag.") managed = property(_getManaged, _setManaged, None, "Command-line managed (C{-M,--managed}) flag.") managedOnly = property(_getManagedOnly, _setManagedOnly, None, "Command-line managed-only (C{-N,--managed-only}) flag.") logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") actions = property(_getActions, _setActions, None, "Command-line actions list.") ################## # Utility methods ################## def validate(self): """ Validates command-line options represented by the object. Unless C{--help} or C{--version} are supplied, at least one action must be specified. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality. 
@note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback script. @raise ValueError: If one of the validations fails. """ if not self.help and not self.version and not self.diagnostics: if self.actions is None or len(self.actions) == 0: raise ValueError("At least one action must be specified.") if self.managed and self.managedOnly: raise ValueError("The --managed and --managed-only options may not be combined.") def buildArgumentList(self, validate=True): """ Extracts options into a list of command line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the C{argumentList} parameter. Unlike L{buildArgumentString}, string arguments are not quoted here, because there is no need for it. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: List representation of command-line arguments. @raise ValueError: If options within the object are invalid. 
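A toy illustration of the normalization described above: however options arrived, the generated list uses long names. The mapping is an assumed excerpt, not the module's actual switch table:

```python
# Hypothetical excerpt of the short-to-long switch mapping.
LONG_NAME = {"-b": "--verbose", "-q": "--quiet", "-f": "--full"}

def normalize_switches(switches):
    """Replace short switches with long ones; order is not preserved."""
    return sorted(LONG_NAME.get(s, s) for s in switches)
```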
""" if validate: self.validate() argumentList = [] if self._help: argumentList.append("--help") if self.version: argumentList.append("--version") if self.verbose: argumentList.append("--verbose") if self.quiet: argumentList.append("--quiet") if self.config is not None: argumentList.append("--config") argumentList.append(self.config) if self.full: argumentList.append("--full") if self.managed: argumentList.append("--managed") if self.managedOnly: argumentList.append("--managed-only") if self.logfile is not None: argumentList.append("--logfile") argumentList.append(self.logfile) if self.owner is not None: argumentList.append("--owner") argumentList.append("%s:%s" % (self.owner[0], self.owner[1])) if self.mode is not None: argumentList.append("--mode") argumentList.append("%o" % self.mode) if self.output: argumentList.append("--output") if self.debug: argumentList.append("--debug") if self.stacktrace: argumentList.append("--stack") if self.diagnostics: argumentList.append("--diagnostics") if self.actions is not None: for action in self.actions: argumentList.append(action) return argumentList def buildArgumentString(self, validate=True): """ Extracts options into a string of command-line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes (C{"}). The resulting string will be suitable for passing back to the constructor in the C{argumentString} parameter. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted. 
@note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: String representation of command-line arguments. @raise ValueError: If options within the object are invalid. """ if validate: self.validate() argumentString = "" if self._help: argumentString += "--help " if self.version: argumentString += "--version " if self.verbose: argumentString += "--verbose " if self.quiet: argumentString += "--quiet " if self.config is not None: argumentString += "--config \"%s\" " % self.config if self.full: argumentString += "--full " if self.managed: argumentString += "--managed " if self.managedOnly: argumentString += "--managed-only " if self.logfile is not None: argumentString += "--logfile \"%s\" " % self.logfile if self.owner is not None: argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1]) if self.mode is not None: argumentString += "--mode %o " % self.mode if self.output: argumentString += "--output " if self.debug: argumentString += "--debug " if self.stacktrace: argumentString += "--stack " if self.diagnostics: argumentString += "--diagnostics " if self.actions is not None: for action in self.actions: argumentString += "\"%s\" " % action return argumentString def _parseArgumentList(self, argumentList): """ Internal method to parse a list of command-line arguments. Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the L{validate} method). For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. C{-l} and a C{--logfile}) then the long switch is used. 
If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used. @param argumentList: List of arguments to a command. @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]} @raise ValueError: If the argument list cannot be successfully parsed. """ switches = { } opts, self.actions = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES) for o, a in opts: # push the switches into a hash switches[o] = a if switches.has_key("-h") or switches.has_key("--help"): self.help = True if switches.has_key("-V") or switches.has_key("--version"): self.version = True if switches.has_key("-b") or switches.has_key("--verbose"): self.verbose = True if switches.has_key("-q") or switches.has_key("--quiet"): self.quiet = True if switches.has_key("-c"): self.config = switches["-c"] if switches.has_key("--config"): self.config = switches["--config"] if switches.has_key("-f") or switches.has_key("--full"): self.full = True if switches.has_key("-M") or switches.has_key("--managed"): self.managed = True if switches.has_key("-N") or switches.has_key("--managed-only"): self.managedOnly = True if switches.has_key("-l"): self.logfile = switches["-l"] if switches.has_key("--logfile"): self.logfile = switches["--logfile"] if switches.has_key("-o"): self.owner = switches["-o"].split(":", 1) if switches.has_key("--owner"): self.owner = switches["--owner"].split(":", 1) if switches.has_key("-m"): self.mode = switches["-m"] if switches.has_key("--mode"): self.mode = switches["--mode"] if switches.has_key("-O") or switches.has_key("--output"): self.output = True if switches.has_key("-d") or switches.has_key("--debug"): self.debug = True if switches.has_key("-s") or switches.has_key("--stack"): self.stacktrace = True if switches.has_key("-D") or switches.has_key("--diagnostics"): self.diagnostics = True ######################################################################### # Main routine 
########################################################################

if __name__ == "__main__":
   result = cli()
   sys.exit(result)

CedarBackup2-2.22.0/CedarBackup2/actions/store.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# Copyright (c) 2004-2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: store.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Implements the standard 'store' action.

########################################################################
# Module documentation
########################################################################

"""
Implements the standard 'store' action.
@sort: executeStore, writeImage, writeStoreIndicator, consistencyCheck
@author: Kenneth J. Pronovici
@author: Dmitry Rutsky
"""

########################################################################
# Imported modules
########################################################################

# System modules
import sys
import os
import logging
import datetime
import tempfile

# Cedar Backup modules
from CedarBackup2.filesystem import compareContents
from CedarBackup2.util import isStartOfWeek
from CedarBackup2.util import mount, unmount, displayBytes
from CedarBackup2.actions.util import createWriter, checkMediaState, buildMediaLabel, writeIndicatorFile
from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR, STORE_INDICATOR

########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.actions.store")

########################################################################
# Public functions
########################################################################

##########################
# executeStore() function
##########################

def executeStore(configPath, options, config):
   """
   Executes the store backup action.

   @note: The rebuild action and the store action are very similar.  The main
   difference is that while store only stores a single day's staging
   directory, the rebuild action operates on multiple staging directories.

   @note: When the store action is complete, we will write a store indicator
   to the daily staging directory we used, so it's obvious that the store
   action has completed.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are problems reading or writing files.
""" logger.debug("Executing the 'store' action.") if sys.platform == "darwin": logger.warn("Warning: the store action is not fully supported on Mac OS X.") logger.warn("See the Cedar Backup software manual for further information.") if config.options is None or config.store is None: raise ValueError("Store configuration is not properly filled in.") if config.store.checkMedia: checkMediaState(config.store) # raises exception if media is not initialized rebuildMedia = options.full logger.debug("Rebuild media flag [%s]" % rebuildMedia) todayIsStart = isStartOfWeek(config.options.startingDay) stagingDirs = _findCorrectDailyDir(options, config) writeImageBlankSafe(config, rebuildMedia, todayIsStart, config.store.blankBehavior, stagingDirs) if config.store.checkData: if sys.platform == "darwin": logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.") logger.warn("See the Cedar Backup software manual for further information.") else: logger.debug("Running consistency check of media.") consistencyCheck(config, stagingDirs) writeStoreIndicator(config, stagingDirs) logger.info("Executed the 'store' action successfully.") ######################## # writeImage() function ######################## def writeImage(config, newDisc, stagingDirs): """ Builds and writes an ISO image containing the indicated stage directories. The generated image will contain each of the staging directories listed in C{stagingDirs}. The directories will be placed into the image at the root by date, so staging directory C{/opt/stage/2005/02/10} will be placed into the disc at C{/2005/02/10}. @note: This function is implemented in terms of L{writeImageBlankSafe}. The C{newDisc} flag is passed in for both C{rebuildMedia} and C{todayIsStart}. @param config: Config object. @param newDisc: Indicates whether the disc should be re-initialized @param stagingDirs: Dictionary mapping directory path to date suffix. 
@raise ValueError: Under many generic error conditions @raise IOError: If there is a problem writing the image to disc. """ writeImageBlankSafe(config, newDisc, newDisc, None, stagingDirs) ################################# # writeImageBlankSafe() function ################################# def writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs): """ Builds and writes an ISO image containing the indicated stage directories. The generated image will contain each of the staging directories listed in C{stagingDirs}. The directories will be placed into the image at the root by date, so staging directory C{/opt/stage/2005/02/10} will be placed into the disc at C{/2005/02/10}. The media will always be written with a media label specific to Cedar Backup. This function is similar to L{writeImage}, but tries to implement a smarter blanking strategy. First, the media is always blanked if the C{rebuildMedia} flag is true. Then, if C{rebuildMedia} is false, blanking behavior and C{todayIsStart} come into effect:: If no blanking behavior is specified, and it is the start of the week, the disc will be blanked If blanking behavior is specified, and either the blank mode is "daily" or the blank mode is "weekly" and it is the start of the week, then the disc will be blanked if it looks like the weekly backup will not fit onto the media. Otherwise, the disc will not be blanked How do we decide whether the weekly backup will fit onto the media? That is what the blanking factor is used for. The following formula is used:: will backup fit? = (bytes available / (1 + bytes required) <= blankFactor The blanking factor will vary from setup to setup, and will probably require some experimentation to get it right. @param config: Config object. 
   @param rebuildMedia: Indicates whether media should be rebuilt
   @param todayIsStart: Indicates whether today is the starting day of the week
   @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior
   @param stagingDirs: Dictionary mapping directory path to date suffix.
   @raise ValueError: Under many generic error conditions
   @raise IOError: If there is a problem writing the image to disc.
   """
   mediaLabel = buildMediaLabel()
   writer = createWriter(config)
   writer.initializeImage(True, config.options.workingDir, mediaLabel)  # default value for newDisc
   for stageDir in stagingDirs.keys():
      logger.debug("Adding stage directory [%s]." % stageDir)
      dateSuffix = stagingDirs[stageDir]
      writer.addImageEntry(stageDir, dateSuffix)
   newDisc = _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)
   writer.setImageNewDisc(newDisc)
   writer.writeImage()

def _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior):
   """
   Gets a value for the newDisc flag based on blanking factor rules.

   The blanking factor rules are described above by L{writeImageBlankSafe}.

   @param writer: Previously configured image writer containing image entries
   @param rebuildMedia: Indicates whether media should be rebuilt
   @param todayIsStart: Indicates whether today is the starting day of the week
   @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior

   @return: newDisc flag to be set on writer.
   """
   newDisc = False
   if rebuildMedia:
      newDisc = True
      logger.debug("Setting new disc flag based on rebuildMedia flag.")
   else:
      if blankBehavior is None:
         logger.debug("Default media blanking behavior is in effect.")
         if todayIsStart:
            newDisc = True
            logger.debug("Setting new disc flag based on todayIsStart.")
      else:
         # note: validation says we can assume that behavior is fully filled in if it exists at all
         logger.debug("Optimized media blanking behavior is in effect based on configuration.")
         if blankBehavior.blankMode == "daily" or (blankBehavior.blankMode == "weekly" and todayIsStart):
            logger.debug("New disc flag will be set based on blank factor calculation.")
            blankFactor = float(blankBehavior.blankFactor)
            logger.debug("Configured blanking factor: %.2f" % blankFactor)
            available = writer.retrieveCapacity().bytesAvailable
            logger.debug("Bytes available: %s" % displayBytes(available))
            required = writer.getEstimatedImageSize()
            logger.debug("Bytes required: %s" % displayBytes(required))
            ratio = available / (1.0 + required)
            logger.debug("Calculated ratio: %.2f" % ratio)
            newDisc = (ratio <= blankFactor)
            logger.debug("%.2f <= %.2f ? %s" % (ratio, blankFactor, newDisc))
         else:
            logger.debug("No blank factor calculation is required based on configuration.")
   logger.debug("New disc flag [%s]." % newDisc)
   return newDisc


#################################
# writeStoreIndicator() function
#################################

def writeStoreIndicator(config, stagingDirs):
   """
   Writes a store indicator file into staging directories.

   The store indicator is written into each of the staging directories when
   either a store or rebuild action has written the staging directory to disc.

   @param config: Config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.
""" for stagingDir in stagingDirs.keys(): writeIndicatorFile(stagingDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup) ############################## # consistencyCheck() function ############################## def consistencyCheck(config, stagingDirs): """ Runs a consistency check against media in the backup device. It seems that sometimes, it's possible to create a corrupted multisession disc (i.e. one that cannot be read) although no errors were encountered while writing the disc. This consistency check makes sure that the data read from disc matches the data that was used to create the disc. The function mounts the device at a temporary mount point in the working directory, and then compares the indicated staging directories in the staging directory and on the media. The comparison is done via functionality in C{filesystem.py}. If no exceptions are thrown, there were no problems with the consistency check. A positive confirmation of "no problems" is also written to the log with C{info} priority. @warning: The implementation of this function is very UNIX-specific. @param config: Config object. @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: If the two directories are not equivalent. @raise IOError: If there is a problem working with the media. """ logger.debug("Running consistency check.") mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) try: mount(config.store.devicePath, mountPoint, "iso9660") for stagingDir in stagingDirs.keys(): discDir = os.path.join(mountPoint, stagingDirs[stagingDir]) logger.debug("Checking [%s] vs. [%s]." % (stagingDir, discDir)) compareContents(stagingDir, discDir, verbose=True) logger.info("Consistency check completed for [%s]. No problems found." 
% stagingDir) finally: unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done ######################################################################## # Private utility functions ######################################################################## ######################### # _findCorrectDailyDir() ######################### def _findCorrectDailyDir(options, config): """ Finds the correct daily staging directory to be written to disk. In Cedar Backup v1.0, we assumed that the correct staging directory matched the current date. However, that has problems. In particular, it breaks down if collect is on one side of midnite and stage is on the other, or if certain processes span midnite. For v2.0, I'm trying to be smarter. I'll first check the current day. If that directory is found, it's good enough. If it's not found, I'll look for a valid directory from the day before or day after I{which has not yet been staged, according to the stage indicator file}. The first one I find, I'll use. If I use a directory other than for the current day I{and} C{config.store.warnMidnite} is set, a warning will be put in the log. There is one exception to this rule. If the C{options.full} flag is set, then the special "span midnite" logic will be disabled and any existing store indicator will be ignored. I did this because I think that most users who run C{cback --full store} twice in a row expect the command to generate two identical discs. With the other rule in place, running that command twice in a row could result in an error ("no unstored directory exists") or could even cause a completely unexpected directory to be written to disc (if some previous day's contents had not yet been written). @note: This code is probably longer and more verbose than it needs to be, but at least it's straightforward. @param options: Options object. @param config: Config object. @return: Correct staging dir, as a dict mapping directory to date suffix. 
   @raise IOError: If the staging directory cannot be found.
   """
   oneDay = datetime.timedelta(days=1)
   today = datetime.date.today()
   yesterday = today - oneDay
   tomorrow = today + oneDay
   todayDate = today.strftime(DIR_TIME_FORMAT)
   yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT)
   tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT)
   todayPath = os.path.join(config.stage.targetDir, todayDate)
   yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate)
   tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate)
   todayStageInd = os.path.join(todayPath, STAGE_INDICATOR)
   yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR)
   tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR)
   todayStoreInd = os.path.join(todayPath, STORE_INDICATOR)
   yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR)
   tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR)
   if options.full:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd):
         logger.info("Store process will use current day's stage directory [%s]" % todayPath)
         return { todayPath:todayDate }
      raise IOError("Unable to find staging directory to store (only tried today due to full option).")
   else:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd):
         logger.info("Store process will use current day's stage directory [%s]" % todayPath)
         return { todayPath:todayDate }
      elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd):
         logger.info("Store process will use previous day's stage directory [%s]" % yesterdayPath)
         if config.store.warnMidnite:
            logger.warn("Warning: store process crossed midnite boundary to find data.")
         return { yesterdayPath:yesterdayDate }
      elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd):
         logger.info("Store process will use next day's stage directory [%s]" % tomorrowPath)
         if config.store.warnMidnite:
            logger.warn("Warning: store process crossed midnite boundary to find data.")
         return { tomorrowPath:tomorrowDate }
      raise IOError("Unable to find unused staging directory to store (tried today, yesterday, tomorrow).")

CedarBackup2-2.22.0/CedarBackup2/actions/collect.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# Copyright (c) 2004-2008,2011 Kenneth J. Pronovici.  All rights reserved.
# Distributed under the GNU General Public License, Version 2; the full
# license notice appears in the store.py header above.
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: collect.py 1020 2011-10-11 21:47:53Z pronovic $
# Purpose  : Implements the standard 'collect' action.

########################################################################
# Module documentation
########################################################################

"""
Implements the standard 'collect' action.
@sort: executeCollect
@author: Kenneth J.
Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import logging
import pickle

# Cedar Backup modules
from CedarBackup2.filesystem import BackupFileList, FilesystemList
from CedarBackup2.util import isStartOfWeek, changeOwnership, displayBytes, buildNormalizedPath
from CedarBackup2.actions.constants import DIGEST_EXTENSION, COLLECT_INDICATOR
from CedarBackup2.actions.util import writeIndicatorFile

########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.actions.collect")

########################################################################
# Public functions
########################################################################

############################
# executeCollect() function
############################

def executeCollect(configPath, options, config):
   """
   Executes the collect backup action.

   @note: When the collect action is complete, we will write a collect
   indicator to the collect directory, so it's obvious that the collect
   action has completed.  The stage process uses this indicator to decide
   whether a peer is ready to be staged.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.
   @raise ValueError: Under many generic error conditions
   @raise TarError: If there is a problem creating a tar file
   """
   logger.debug("Executing the 'collect' action.")
   if config.options is None or config.collect is None:
      raise ValueError("Collect configuration is not properly filled in.")
   if ((config.collect.collectFiles is None or len(config.collect.collectFiles) < 1) and
       (config.collect.collectDirs is None or len(config.collect.collectDirs) < 1)):
      raise ValueError("There must be at least one collect file or collect directory.")
   fullBackup = options.full
   logger.debug("Full backup flag is [%s]" % fullBackup)
   todayIsStart = isStartOfWeek(config.options.startingDay)
   resetDigest = fullBackup or todayIsStart
   logger.debug("Reset digest flag is [%s]" % resetDigest)
   if config.collect.collectFiles is not None:
      for collectFile in config.collect.collectFiles:
         logger.debug("Working with collect file [%s]" % collectFile.absolutePath)
         collectMode = _getCollectMode(config, collectFile)
         archiveMode = _getArchiveMode(config, collectFile)
         digestPath = _getDigestPath(config, collectFile.absolutePath)
         tarfilePath = _getTarfilePath(config, collectFile.absolutePath, archiveMode)
         if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
            logger.debug("File meets criteria to be backed up today.")
            _collectFile(config, collectFile.absolutePath, tarfilePath,
                         collectMode, archiveMode, resetDigest, digestPath)
         else:
            logger.debug("File will not be backed up, per collect mode.")
         logger.info("Completed collecting file [%s]" % collectFile.absolutePath)
   if config.collect.collectDirs is not None:
      for collectDir in config.collect.collectDirs:
         logger.debug("Working with collect directory [%s]" % collectDir.absolutePath)
         collectMode = _getCollectMode(config, collectDir)
         archiveMode = _getArchiveMode(config, collectDir)
         ignoreFile = _getIgnoreFile(config, collectDir)
         linkDepth = _getLinkDepth(collectDir)
         dereference = _getDereference(collectDir)
         recursionLevel = _getRecursionLevel(collectDir)
         (excludePaths, excludePatterns) = _getExclusions(config, collectDir)
         if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
            logger.debug("Directory meets criteria to be backed up today.")
            _collectDirectory(config, collectDir.absolutePath,
                              collectMode, archiveMode, ignoreFile, linkDepth, dereference,
                              resetDigest, excludePaths, excludePatterns, recursionLevel)
         else:
            logger.debug("Directory will not be backed up, per collect mode.")
         logger.info("Completed collecting directory [%s]" % collectDir.absolutePath)
   writeIndicatorFile(config.collect.targetDir, COLLECT_INDICATOR, config.options.backupUser, config.options.backupGroup)
   logger.info("Executed the 'collect' action successfully.")


########################################################################
# Private utility functions
########################################################################

##########################
# _collectFile() function
##########################

def _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath):
   """
   Collects a configured collect file.

   The indicated collect file is collected into the indicated tarfile.  For
   files that are collected incrementally, we'll use the indicated digest
   path and pay attention to the reset digest flag (basically, the reset
   digest flag ignores any existing digest, but a new digest is always
   rewritten).

   The caller must decide what the collect and archive modes are, since they
   can be on both the collect configuration and the collect file itself.

   @param config: Config object.
   @param absolutePath: Absolute path of file to collect.
   @param tarfilePath: Path to tarfile that should be created.
   @param collectMode: Collect mode to use.
   @param archiveMode: Archive mode to use.
   @param resetDigest: Reset digest flag.
   @param digestPath: Path to digest file on disk, if needed.
""" backupList = BackupFileList() backupList.addFile(absolutePath) _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) ############################### # _collectDirectory() function ############################### def _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel): """ Collects a configured collect directory. The indicated collect directory is collected into the indicated tarfile. For directories that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten). The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect directory itself. @param config: Config object. @param absolutePath: Absolute path of directory to collect. @param collectMode: Collect mode to use. @param archiveMode: Archive mode to use. @param ignoreFile: Ignore file to use. @param linkDepth: Link depth value to use. @param dereference: Dereference flag to use. @param resetDigest: Reset digest flag. @param excludePaths: List of absolute paths to exclude. @param excludePatterns: List of patterns to exclude. 
@param recursionLevel: Recursion level (zero for no recursion) """ if recursionLevel == 0: # Collect the actual directory because we're at recursion level 0 logger.info("Collecting directory [%s]" % absolutePath) tarfilePath = _getTarfilePath(config, absolutePath, archiveMode) digestPath = _getDigestPath(config, absolutePath) backupList = BackupFileList() backupList.ignoreFile = ignoreFile backupList.excludePaths = excludePaths backupList.excludePatterns = excludePatterns backupList.addDirContents(absolutePath, linkDepth=linkDepth, dereference=dereference) _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) else: # Find all of the immediate subdirectories subdirs = FilesystemList() subdirs.excludeFiles = True subdirs.excludeLinks = True subdirs.excludePaths = excludePaths subdirs.excludePatterns = excludePatterns subdirs.addDirContents(path=absolutePath, recursive=False, addSelf=False) # Back up the subdirectories separately for subdir in subdirs: _collectDirectory(config, subdir, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel-1) excludePaths.append(subdir) # this directory is already backed up, so exclude it # Back up everything that hasn't previously been backed up _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, 0) ############################ # _executeBackup() function ############################ def _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath): """ Execute the backup process for the indicated backup list. This function exists mainly to consolidate functionality between the L{_collectFile} and L{_collectDirectory} functions. 
   Those functions build the backup list; this function causes the backup to
   execute properly and also manages usage of the digest file on disk as
   explained in their comments.

   For collect files, the digest file will always just contain the single
   file that is being backed up.  This might be a little wasteful in terms of
   the number of files that we keep around, but it's consistent and easy to
   understand.

   @param config: Config object.
   @param backupList: List to execute backup for
   @param absolutePath: Absolute path of directory or file to collect.
   @param tarfilePath: Path to tarfile that should be created.
   @param collectMode: Collect mode to use.
   @param archiveMode: Archive mode to use.
   @param resetDigest: Reset digest flag.
   @param digestPath: Path to digest file on disk, if needed.
   """
   if collectMode != 'incr':
      logger.debug("Collect mode is [%s]; no digest will be used." % collectMode)
      if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
         logger.info("Backing up file [%s] (%s)." % (absolutePath, displayBytes(backupList.totalSize())))
      else:
         logger.info("Backing up %d files in [%s] (%s)." % (len(backupList), absolutePath, displayBytes(backupList.totalSize())))
      if len(backupList) > 0:
         backupList.generateTarfile(tarfilePath, archiveMode, True)
         changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
   else:
      if resetDigest:
         logger.debug("Based on resetDigest flag, digest will be cleared.")
         oldDigest = {}
      else:
         logger.debug("Based on resetDigest flag, digest will be loaded from disk.")
         oldDigest = _loadDigest(digestPath)
      (removed, newDigest) = backupList.removeUnchanged(oldDigest, captureDigest=True)
      logger.debug("Removed %d unchanged files based on digest values." % removed)
      if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
         logger.info("Backing up file [%s] (%s)." % (absolutePath, displayBytes(backupList.totalSize())))
      else:
         logger.info("Backing up %d files in [%s] (%s)." % (len(backupList), absolutePath, displayBytes(backupList.totalSize())))
      if len(backupList) > 0:
         backupList.generateTarfile(tarfilePath, archiveMode, True)
         changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
      _writeDigest(config, newDigest, digestPath)


#########################
# _loadDigest() function
#########################

def _loadDigest(digestPath):
   """
   Loads the indicated digest path from disk into a dictionary.

   If we can't load the digest successfully (either because it doesn't exist
   or for some other reason), then an empty dictionary will be returned - but
   the condition will be logged.

   @param digestPath: Path to the digest file on disk.
   @return: Dictionary representing contents of digest path.
   """
   if not os.path.isfile(digestPath):
      digest = {}
      logger.debug("Digest [%s] does not exist on disk." % digestPath)
   else:
      try:
         digest = pickle.load(open(digestPath, "r"))
         logger.debug("Loaded digest [%s] from disk: %d entries." % (digestPath, len(digest)))
      except:
         digest = {}
         logger.error("Failed loading digest [%s] from disk." % digestPath)
   return digest


##########################
# _writeDigest() function
##########################

def _writeDigest(config, digest, digestPath):
   """
   Writes the digest dictionary to the indicated digest path on disk.

   If we can't write the digest successfully for any reason, we'll log the
   condition but won't throw an exception.

   @param config: Config object.
   @param digest: Digest dictionary to write to disk.
   @param digestPath: Path to the digest file on disk.
   """
   try:
      pickle.dump(digest, open(digestPath, "w"))
      changeOwnership(digestPath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Wrote new digest [%s] to disk: %d entries." % (digestPath, len(digest)))
   except:
      logger.error("Failed to write digest [%s] to disk." % digestPath)


########################################################################
# Private attribute "getter" functions
########################################################################

#############################
# _getCollectMode() function
#############################

def _getCollectMode(config, item):
   """
   Gets the collect mode that should be used for a collect directory or file.

   If possible, use the one on the file or directory, otherwise take from
   collect section.

   @param config: Config object.
   @param item: C{CollectFile} or C{CollectDir} object
   @return: Collect mode to use.
   """
   if item.collectMode is None:
      collectMode = config.collect.collectMode
   else:
      collectMode = item.collectMode
   logger.debug("Collect mode is [%s]" % collectMode)
   return collectMode


#############################
# _getArchiveMode() function
#############################

def _getArchiveMode(config, item):
   """
   Gets the archive mode that should be used for a collect directory or file.

   If possible, use the one on the file or directory, otherwise take from
   collect section.

   @param config: Config object.
   @param item: C{CollectFile} or C{CollectDir} object
   @return: Archive mode to use.
   """
   if item.archiveMode is None:
      archiveMode = config.collect.archiveMode
   else:
      archiveMode = item.archiveMode
   logger.debug("Archive mode is [%s]" % archiveMode)
   return archiveMode


############################
# _getIgnoreFile() function
############################

def _getIgnoreFile(config, item):
   """
   Gets the ignore file that should be used for a collect directory or file.

   If possible, use the one on the file or directory, otherwise take from
   collect section.

   @param config: Config object.
   @param item: C{CollectFile} or C{CollectDir} object
   @return: Ignore file to use.
""" if item.ignoreFile is None: ignoreFile = config.collect.ignoreFile else: ignoreFile = item.ignoreFile logger.debug("Ignore file is [%s]" % ignoreFile) return ignoreFile ############################ # _getLinkDepth() function ############################ def _getLinkDepth(item): """ Gets the link depth that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero). @param item: C{CollectDir} object @return: Link depth to use. """ if item.linkDepth is None: linkDepth = 0 else: linkDepth = item.linkDepth logger.debug("Link depth is [%d]" % linkDepth) return linkDepth ############################ # _getDereference() function ############################ def _getDereference(item): """ Gets the dereference flag that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of False. @param item: C{CollectDir} object @return: Dereference flag to use. """ if item.dereference is None: dereference = False else: dereference = item.dereference logger.debug("Dereference flag is [%s]" % dereference) return dereference ################################ # _getRecursionLevel() function ################################ def _getRecursionLevel(item): """ Gets the recursion level that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero). @param item: C{CollectDir} object @return: Recursion level to use. """ if item.recursionLevel is None: recursionLevel = 0 else: recursionLevel = item.recursionLevel logger.debug("Recursion level is [%d]" % recursionLevel) return recursionLevel ############################ # _getDigestPath() function ############################ def _getDigestPath(config, absolutePath): """ Gets the digest path associated with a collect directory or file. @param config: Config object. 
@param absolutePath: Absolute path to generate digest for @return: Absolute path to the digest associated with the collect directory or file. """ normalized = buildNormalizedPath(absolutePath) filename = "%s.%s" % (normalized, DIGEST_EXTENSION) digestPath = os.path.join(config.options.workingDir, filename) logger.debug("Digest path is [%s]" % digestPath) return digestPath ############################# # _getTarfilePath() function ############################# def _getTarfilePath(config, absolutePath, archiveMode): """ Gets the tarfile path (including correct extension) associated with a collect directory. @param config: Config object. @param absolutePath: Absolute path to generate tarfile for @param archiveMode: Archive mode to use for this tarfile. @return: Absolute path to the tarfile associated with the collect directory. """ if archiveMode == 'tar': extension = "tar" elif archiveMode == 'targz': extension = "tar.gz" elif archiveMode == 'tarbz2': extension = "tar.bz2" normalized = buildNormalizedPath(absolutePath) filename = "%s.%s" % (normalized, extension) tarfilePath = os.path.join(config.collect.targetDir, filename) logger.debug("Tarfile path is [%s]" % tarfilePath) return tarfilePath ############################ # _getExclusions() function ############################ def _getExclusions(config, collectDir): """ Gets exclusions (file and patterns) associated with a collect directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the collect configuration absolute exclude paths and the collect directory's absolute and relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the list of patterns from the collect configuration and from the collect directory itself. @param config: Config object. @param collectDir: Collect directory object. 
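The archive-mode-to-extension mapping inside _getTarfilePath() can equivalently be written table-driven; a sketch under that assumption (the dictionary and function below are illustrative, not the shipped implementation). Note that an explicit lookup also surfaces an unknown archive mode as a clear ValueError instead of a NameError on the unset extension variable:

```python
import os

# Maps configured archive mode to the tarfile extension it produces.
TARFILE_EXTENSIONS = {"tar": "tar", "targz": "tar.gz", "tarbz2": "tar.bz2"}

def tarfilePath(targetDir, normalizedName, archiveMode):
   """Build the tarfile path for a collect directory, e.g. name.tar.gz for 'targz'."""
   try:
      extension = TARFILE_EXTENSIONS[archiveMode]
   except KeyError:
      raise ValueError("Archive mode [%s] is not valid." % archiveMode)
   return os.path.join(targetDir, "%s.%s" % (normalizedName, extension))

print(tarfilePath("/var/backup/collect", "home-user", "targz"))
# /var/backup/collect/home-user.tar.gz
```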
@return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if config.collect.absoluteExcludePaths is not None: paths.extend(config.collect.absoluteExcludePaths) if collectDir.absoluteExcludePaths is not None: paths.extend(collectDir.absoluteExcludePaths) if collectDir.relativeExcludePaths is not None: for relativePath in collectDir.relativeExcludePaths: paths.append(os.path.join(collectDir.absolutePath, relativePath)) patterns = [] if config.collect.excludePatterns is not None: patterns.extend(config.collect.excludePatterns) if collectDir.excludePatterns is not None: patterns.extend(collectDir.excludePatterns) logger.debug("Exclude paths: %s" % paths) logger.debug("Exclude patterns: %s" % patterns) return(paths, patterns) CedarBackup2-2.22.0/CedarBackup2/actions/util.py0000664000175000017500000003201512143053141022755 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: util.py 1041 2013-05-10 02:05:13Z pronovic $ # Purpose : Implements action-related utilities # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements action-related utilities @sort: findDailyDirs @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import time import tempfile import logging # Cedar Backup modules from CedarBackup2.filesystem import FilesystemList from CedarBackup2.util import changeOwnership from CedarBackup2.util import deviceMounted from CedarBackup2.writers.util import readMediaLabel from CedarBackup2.writers.cdwriter import CdWriter from CedarBackup2.writers.dvdwriter import DvdWriter from CedarBackup2.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDR_80, MEDIA_CDRW_74, MEDIA_CDRW_80 from CedarBackup2.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW from CedarBackup2.config import DEFAULT_MEDIA_TYPE, DEFAULT_DEVICE_TYPE, REWRITABLE_MEDIA_TYPES from CedarBackup2.actions.constants import INDICATOR_PATTERN ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.util") MEDIA_LABEL_PREFIX = "CEDAR BACKUP" ######################################################################## # Public utility functions ######################################################################## ########################### # findDailyDirs() function ########################### def findDailyDirs(stagingDir, indicatorFile): """ 
Returns a list of all daily staging directories that do not contain the indicated indicator file. @param stagingDir: Configured staging directory (config.targetDir) @param indicatorFile: Name of the indicator file to check for. @return: List of absolute paths to daily staging directories. """ results = FilesystemList() yearDirs = FilesystemList() yearDirs.excludeFiles = True yearDirs.excludeLinks = True yearDirs.addDirContents(path=stagingDir, recursive=False, addSelf=False) for yearDir in yearDirs: monthDirs = FilesystemList() monthDirs.excludeFiles = True monthDirs.excludeLinks = True monthDirs.addDirContents(path=yearDir, recursive=False, addSelf=False) for monthDir in monthDirs: dailyDirs = FilesystemList() dailyDirs.excludeFiles = True dailyDirs.excludeLinks = True dailyDirs.addDirContents(path=monthDir, recursive=False, addSelf=False) for dailyDir in dailyDirs: if os.path.exists(os.path.join(dailyDir, indicatorFile)): logger.debug("Skipping directory [%s]; contains %s." % (dailyDir, indicatorFile)) else: logger.debug("Adding [%s] to list of daily directories." % dailyDir) results.append(dailyDir) # just put it in the list, no fancy operations return results ########################### # createWriter() function ########################### def createWriter(config): """ Creates a writer object based on current configuration. This function creates and returns a writer based on configuration. This is done to abstract action functionality from knowing what kind of writer is in use. Since all writers implement the same interface, there's no need for actions to care which one they're working with. Currently, the C{cdwriter} and C{dvdwriter} device types are allowed. An exception will be raised if any other device type is used. This function also checks to make sure that the device isn't mounted before creating a writer object for it. Experience shows that sometimes if the device is mounted, we have problems with the backup. We may as well do the check here first, before instantiating the writer. @param config: Config object.
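findDailyDirs() walks exactly three fixed levels because staging directories are laid out per day as stagingDir/YYYY/MM/DD (the DIR_TIME_FORMAT constant in actions/constants.py). A sketch of how one such daily path is formed, with a hypothetical function name and a fixed date for illustration:

```python
import os
import time

DIR_TIME_FORMAT = "%Y/%m/%d"   # same layout constant as actions/constants.py

def dailyStagingDir(stagingDir, timestamp):
   """Build the per-day staging path, e.g. stagingDir/2013/05/09."""
   return os.path.join(stagingDir, time.strftime(DIR_TIME_FORMAT, timestamp))

may9 = time.struct_time((2013, 5, 9, 12, 0, 0, 3, 129, 0))
print(dailyStagingDir("/var/backup/staging", may9))
# /var/backup/staging/2013/05/09
```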
@return: Writer that can be used to write a directory to some media. @raise ValueError: If there is a problem getting the writer. @raise IOError: If there is a problem creating the writer object. """ devicePath = config.store.devicePath deviceScsiId = config.store.deviceScsiId driveSpeed = config.store.driveSpeed noEject = config.store.noEject refreshMediaDelay = config.store.refreshMediaDelay ejectDelay = config.store.ejectDelay deviceType = _getDeviceType(config) mediaType = _getMediaType(config) if deviceMounted(devicePath): raise IOError("Device [%s] is currently mounted." % (devicePath)) if deviceType == "cdwriter": return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) elif deviceType == "dvdwriter": return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) else: raise ValueError("Device type [%s] is invalid." % deviceType) ################################ # writeIndicatorFile() function ################################ def writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup): """ Writes an indicator file into a target directory. @param targetDir: Target directory in which to write indicator @param indicatorFile: Name of the indicator file @param backupUser: User that indicator file should be owned by @param backupGroup: Group that indicator file should be owned by @raise IOError: If there is a problem writing the indicator file """ filename = os.path.join(targetDir, indicatorFile) logger.debug("Writing indicator file [%s]." % filename) try: open(filename, "w").write("") changeOwnership(filename, backupUser, backupGroup) except Exception, e: logger.error("Error writing [%s]: %s" % (filename, e)) raise e ############################ # getBackupFiles() function ############################ def getBackupFiles(targetDir): """ Gets a list of backup files in a target directory. Files that match INDICATOR_PATTERN (i.e.
C{"cback.store"}, C{"cback.stage"}, etc.) are assumed to be indicator files and are ignored. @param targetDir: Directory to look in @return: List of backup files in the directory @raise ValueError: If the target directory does not exist """ if not os.path.isdir(targetDir): raise ValueError("Target directory [%s] is not a directory or does not exist." % targetDir) fileList = FilesystemList() fileList.excludeDirs = True fileList.excludeLinks = True fileList.excludeBasenamePatterns = INDICATOR_PATTERN fileList.addDirContents(targetDir) return fileList #################### # checkMediaState() #################### def checkMediaState(storeConfig): """ Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup. We can tell whether the media has been initialized by looking at its media label. If the media label starts with MEDIA_LABEL_PREFIX, then it has been initialized. The check varies depending on whether the media is rewritable or not. For non-rewritable media, we also accept a C{None} media label, since this kind of media cannot safely be initialized. @param storeConfig: Store configuration @raise ValueError: If media is not initialized. 
""" mediaLabel = readMediaLabel(storeConfig.devicePath) if storeConfig.mediaType in REWRITABLE_MEDIA_TYPES: if mediaLabel is None: raise ValueError("Media has not been initialized: no media label available") elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) else: if mediaLabel is None: logger.info("Media has no media label; assuming OK since media is not rewritable.") elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) ######################### # initializeMediaState() ######################### def initializeMediaState(config): """ Initializes state of the media in the backup device so Cedar Backup can recognize it. This is done by writing an mostly-empty image (it contains a "Cedar Backup" directory) to the media with a known media label. @note: Only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. @param config: Cedar Backup configuration @raise ValueError: If media could not be initialized. @raise ValueError: If the configured media type is not rewritable """ if not config.store.mediaType in REWRITABLE_MEDIA_TYPES: raise ValueError("Only rewritable media types can be initialized.") mediaLabel = buildMediaLabel() writer = createWriter(config) writer.refreshMedia() writer.initializeImage(True, config.options.workingDir, mediaLabel) # always create a new disc tempdir = tempfile.mkdtemp(dir=config.options.workingDir) try: writer.addImageEntry(tempdir, "CedarBackup") writer.writeImage() finally: if os.path.exists(tempdir): try: os.rmdir(tempdir) except: pass #################### # buildMediaLabel() #################### def buildMediaLabel(): """ Builds a media label to be used on Cedar Backup media. 
@return: Media label as a string. """ currentDate = time.strftime("%d-%b-%Y").upper() return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate) ######################################################################## # Private attribute "getter" functions ######################################################################## ############################ # _getDeviceType() function ############################ def _getDeviceType(config): """ Gets the device type that should be used for storing. Use the configured device type if not C{None}, otherwise use L{config.DEFAULT_DEVICE_TYPE}. @param config: Config object. @return: Device type to be used. """ if config.store.deviceType is None: deviceType = DEFAULT_DEVICE_TYPE else: deviceType = config.store.deviceType logger.debug("Device type is [%s]" % deviceType) return deviceType ########################### # _getMediaType() function ########################### def _getMediaType(config): """ Gets the media type that should be used for storing. Use the configured media type if not C{None}, otherwise use C{DEFAULT_MEDIA_TYPE}. Once we figure out what configuration value to use, we return a media type value that is valid in one of the supported writers:: MEDIA_CDR_74 MEDIA_CDRW_74 MEDIA_CDR_80 MEDIA_CDRW_80 MEDIA_DVDPLUSR MEDIA_DVDPLUSRW @param config: Config object. @return: Media type to be used as a writer media type value. @raise ValueError: If the media type is not valid. 
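buildMediaLabel() and checkMediaState() form a pair: the label written during initialization is what later proves to checkMediaState() that the media was initialized. A sketch of that round trip (function names here are hypothetical, and the date is fixed for illustration):

```python
import time

MEDIA_LABEL_PREFIX = "CEDAR BACKUP"

def buildLabel(timestamp):
   """Label in the same 'CEDAR BACKUP DD-MON-YYYY' shape as buildMediaLabel()."""
   return "%s %s" % (MEDIA_LABEL_PREFIX, time.strftime("%d-%b-%Y", timestamp).upper())

def isInitialized(mediaLabel):
   """Mirror the checkMediaState() test applied to rewritable media."""
   return mediaLabel is not None and mediaLabel.startswith(MEDIA_LABEL_PREFIX)

label = buildLabel(time.struct_time((2013, 5, 9, 12, 0, 0, 3, 129, 0)))
print(isInitialized(label))        # True
print(isInitialized("MY_PHOTOS"))  # False
print(isInitialized(None))         # False
```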
""" if config.store.mediaType is None: mediaType = DEFAULT_MEDIA_TYPE else: mediaType = config.store.mediaType if mediaType == "cdr-74": logger.debug("Media type is MEDIA_CDR_74.") return MEDIA_CDR_74 elif mediaType == "cdrw-74": logger.debug("Media type is MEDIA_CDRW_74.") return MEDIA_CDRW_74 elif mediaType == "cdr-80": logger.debug("Media type is MEDIA_CDR_80.") return MEDIA_CDR_80 elif mediaType == "cdrw-80": logger.debug("Media type is MEDIA_CDRW_80.") return MEDIA_CDRW_80 elif mediaType == "dvd+r": logger.debug("Media type is MEDIA_DVDPLUSR.") return MEDIA_DVDPLUSR elif mediaType == "dvd+rw": logger.debug("Media type is MEDIA_DVDPLUSRW.") return MEDIA_DVDPLUSRW else: raise ValueError("Media type [%s] is not valid." % mediaType) CedarBackup2-2.22.0/CedarBackup2/actions/initialize.py0000664000175000017500000000630711415165677024171 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: initialize.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Implements the standard 'initialize' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'initialize' action. @sort: executeInitialize @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup2.actions.util import initializeMediaState ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.initialize") ######################################################################## # Public functions ######################################################################## ############################### # executeInitialize() function ############################### def executeInitialize(configPath, options, config): """ Executes the initialize action. The initialize action initializes the media currently in the writer device so that Cedar Backup can recognize it later. This is an optional step; it's only required if checkMedia is set on the store configuration. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
""" logger.debug("Executing the 'initialize' action.") if config.options is None or config.store is None: raise ValueError("Store configuration is not properly filled in.") initializeMediaState(config) logger.info("Executed the 'initialize' action successfully.") CedarBackup2-2.22.0/CedarBackup2/actions/validate.py0000664000175000017500000002723711645137073023620 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: validate.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Implements the standard 'validate' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'validate' action. @sort: executeValidate @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging # Cedar Backup modules from CedarBackup2.util import getUidGid, getFunctionReference from CedarBackup2.actions.util import createWriter ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.validate") ######################################################################## # Public functions ######################################################################## ############################# # executeValidate() function ############################# def executeValidate(configPath, options, config): """ Executes the validate action. This action validates each of the individual sections in the config file. This is a "runtime" validation. The config file itself is already valid in a structural sense, so what we check here that is that we can actually use the configuration without any problems. There's a separate validation function for each of the configuration sections. Each validation function returns a true/false indication for whether configuration was valid, and then logs any configuration problems it finds. This way, one pass over configuration indicates most or all of the obvious problems, rather than finding just one problem at a time. Any reported problems will be logged at the ERROR level normally, or at the INFO level if the quiet flag is enabled. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: If some configuration value is invalid. 
""" logger.debug("Executing the 'validate' action.") if options.quiet: logfunc = logger.info # info so it goes to the log else: logfunc = logger.error # error so it goes to the screen valid = True valid &= _validateReference(config, logfunc) valid &= _validateOptions(config, logfunc) valid &= _validateCollect(config, logfunc) valid &= _validateStage(config, logfunc) valid &= _validateStore(config, logfunc) valid &= _validatePurge(config, logfunc) valid &= _validateExtensions(config, logfunc) if valid: logfunc("Configuration is valid.") else: logfunc("Configuration is not valid.") ######################################################################## # Private utility functions ######################################################################## ####################### # _checkDir() function ####################### def _checkDir(path, writable, logfunc, prefix): """ Checks that the indicated directory is OK. The path must exist, must be a directory, must be readable and executable, and must optionally be writable. @param path: Path to check. @param writable: Check that path is writable. @param logfunc: Function to use for logging errors. @param prefix: Prefix to use on logged errors. @return: True if the directory is OK, False otherwise. """ if not os.path.exists(path): logfunc("%s [%s] does not exist." % (prefix, path)) return False if not os.path.isdir(path): logfunc("%s [%s] is not a directory." % (prefix, path)) return False if not os.access(path, os.R_OK): logfunc("%s [%s] is not readable." % (prefix, path)) return False if not os.access(path, os.X_OK): logfunc("%s [%s] is not executable." % (prefix, path)) return False if writable and not os.access(path, os.W_OK): logfunc("%s [%s] is not writable." % (prefix, path)) return False return True ################################ # _validateReference() function ################################ def _validateReference(config, logfunc): """ Execute runtime validations on reference configuration. 
We only validate that reference configuration exists at all. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. """ valid = True if config.reference is None: logfunc("Required reference configuration does not exist.") valid = False return valid ############################## # _validateOptions() function ############################## def _validateOptions(config, logfunc): """ Execute runtime validations on options configuration. The following validations are enforced: - The options section must exist - The working directory must exist and must be writable - The backup user and backup group must exist @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. """ valid = True if config.options is None: logfunc("Required options configuration does not exist.") valid = False else: valid &= _checkDir(config.options.workingDir, True, logfunc, "Working directory") try: getUidGid(config.options.backupUser, config.options.backupGroup) except ValueError: logfunc("Backup user:group [%s:%s] invalid." % (config.options.backupUser, config.options.backupGroup)) valid = False return valid ############################## # _validateCollect() function ############################## def _validateCollect(config, logfunc): """ Execute runtime validations on collect configuration. The following validations are enforced: - The target directory must exist and must be writable - Each of the individual collect directories must exist and must be readable @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. 
""" valid = True if config.collect is not None: valid &= _checkDir(config.collect.targetDir, True, logfunc, "Collect target directory") if config.collect.collectDirs is not None: for collectDir in config.collect.collectDirs: valid &= _checkDir(collectDir.absolutePath, False, logfunc, "Collect directory") return valid ############################ # _validateStage() function ############################ def _validateStage(config, logfunc): """ Execute runtime validations on stage configuration. The following validations are enforced: - The target directory must exist and must be writable - Each local peer's collect directory must exist and must be readable @note: We currently do not validate anything having to do with remote peers, since we don't have a straightforward way of doing it. It would require adding an rsh command rather than just an rcp command to configuration, and that just doesn't seem worth it right now. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.stage is not None: valid &= _checkDir(config.stage.targetDir, True, logfunc, "Stage target dir ") if config.stage.localPeers is not None: for peer in config.stage.localPeers: valid &= _checkDir(peer.collectDir, False, logfunc, "Local peer collect dir ") return valid ############################ # _validateStore() function ############################ def _validateStore(config, logfunc): """ Execute runtime validations on store configuration. The following validations are enforced: - The source directory must exist and must be readable - The backup device (path and SCSI device) must be valid @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. 
""" valid = True if config.store is not None: valid &= _checkDir(config.store.sourceDir, False, logfunc, "Store source directory") try: createWriter(config) except ValueError: logfunc("Backup device [%s] [%s] is not valid." % (config.store.devicePath, config.store.deviceScsiId)) valid = False return valid ############################ # _validatePurge() function ############################ def _validatePurge(config, logfunc): """ Execute runtime validations on purge configuration. The following validations are enforced: - Each purge directory must exist and must be writable @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.purge is not None: if config.purge.purgeDirs is not None: for purgeDir in config.purge.purgeDirs: valid &= _checkDir(purgeDir.absolutePath, True, logfunc, "Purge directory") return valid ################################# # _validateExtensions() function ################################# def _validateExtensions(config, logfunc): """ Execute runtime validations on extensions configuration. The following validations are enforced: - Each indicated extension function must exist. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.extensions is not None: if config.extensions.actions is not None: for action in config.extensions.actions: try: getFunctionReference(action.module, action.function) except ImportError: logfunc("Unable to find function [%s.%s]." % (action.module, action.function)) valid = False except ValueError: logfunc("Function [%s.%s] is not callable." 
                       % (action.module, action.function))
               valid = False
   return valid

CedarBackup2-2.22.0/CedarBackup2/actions/__init__.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Official Cedar Backup Extensions
# Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $
# Purpose  : Provides package initialization
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Cedar Backup actions.

This package contains code related to the official Cedar Backup actions
(collect, stage, store, purge, rebuild, and validate).  The action modules
consist of mostly "glue" code that uses other lower-level functionality to
actually implement a backup.  There is one module for each high-level backup
action, plus a module that provides shared constants.

All of the public action functions implement the Cedar Backup Extension
Architecture Interface, i.e. the same interface that extensions implement.

@author: Kenneth J. Pronovici
"""

########################################################################
# Package initialization
########################################################################

# Using 'from CedarBackup2.actions import *' will just import the modules listed
# in the __all__ variable.
__all__ = [ 'constants', 'collect', 'initialize', 'stage', 'store', 'purge', 'util', 'rebuild', 'validate', ]

CedarBackup2-2.22.0/CedarBackup2/actions/constants.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: constants.py 998 2010-07-07 19:56:08Z pronovic $
# Purpose  : Provides common constants used by standard actions.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides common constants used by standard actions.

@sort: DIR_TIME_FORMAT, DIGEST_EXTENSION, INDICATOR_PATTERN,
       COLLECT_INDICATOR, STAGE_INDICATOR, STORE_INDICATOR

@author: Kenneth J. Pronovici
"""

########################################################################
# Module-wide constants and variables
########################################################################

DIR_TIME_FORMAT = "%Y/%m/%d"
DIGEST_EXTENSION = "sha"

INDICATOR_PATTERN = [ "cback\..*", ]
COLLECT_INDICATOR = "cback.collect"
STAGE_INDICATOR = "cback.stage"
STORE_INDICATOR = "cback.store"

CedarBackup2-2.22.0/CedarBackup2/actions/stage.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2008,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: stage.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Implements the standard 'stage' action.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Implements the standard 'stage' action.

@sort: executeStage

@author: Kenneth J. Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import time
import logging

# Cedar Backup modules
from CedarBackup2.peer import RemotePeer, LocalPeer
from CedarBackup2.util import getUidGid, changeOwnership, isStartOfWeek, isRunningAsRoot
from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR
from CedarBackup2.actions.util import writeIndicatorFile


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.actions.stage")


########################################################################
# Public functions
########################################################################

##########################
# executeStage() function
##########################

def executeStage(configPath, options, config):
   """
   Executes the stage backup action.

   @note: The daily directory is derived once and then we stick with it, just
   in case a backup happens to span midnight.

   @note: As portions of the stage action are complete, we will write various
   indicator files so that it's obvious what actions have been completed.  Each
   peer gets a stage indicator in its collect directory, and then the master
   gets a stage indicator in its daily staging directory.  The store process
   uses the master's stage indicator to decide whether a directory is ready to
   be stored.  Currently, nothing uses the indicator at each peer, and it
   exists for reference only.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.
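The daily staging directory derivation noted above reduces to a `strftime` call against `DIR_TIME_FORMAT`; a minimal standalone sketch (the target path and helper name are illustrative, not taken from real configuration):

```python
import os
import datetime

DIR_TIME_FORMAT = "%Y/%m/%d"   # same format constant defined in actions/constants.py

def deriveDailyDir(targetDir, today=None):
   # Derive the daily directory once and stick with it, so a backup
   # spanning midnight keeps writing where it started.
   today = today or datetime.date.today()
   return os.path.join(targetDir, today.strftime(DIR_TIME_FORMAT))

print(deriveDailyDir("/opt/backup/staging", datetime.date(2002, 5, 23)))  # /opt/backup/staging/2002/05/23
```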
@raise ValueError: Under many generic error conditions @raise IOError: If there are problems reading or writing files. """ logger.debug("Executing the 'stage' action.") if config.options is None or config.stage is None: raise ValueError("Stage configuration is not properly filled in.") dailyDir = _getDailyDir(config) localPeers = _getLocalPeers(config) remotePeers = _getRemotePeers(config) allPeers = localPeers + remotePeers stagingDirs = _createStagingDirs(config, dailyDir, allPeers) for peer in allPeers: logger.info("Staging peer [%s]." % peer.name) ignoreFailures = _getIgnoreFailuresFlag(options, config, peer) if not peer.checkCollectIndicator(): if not ignoreFailures: logger.error("Peer [%s] was not ready to be staged." % peer.name) else: logger.info("Peer [%s] was not ready to be staged." % peer.name) continue logger.debug("Found collect indicator.") targetDir = stagingDirs[peer.name] if isRunningAsRoot(): # Since we're running as root, we can change ownership ownership = getUidGid(config.options.backupUser, config.options.backupGroup) logger.debug("Using target dir [%s], ownership [%d:%d]." % (targetDir, ownership[0], ownership[1])) else: # Non-root cannot change ownership, so don't set it ownership = None logger.debug("Using target dir [%s], ownership [None]." % targetDir) try: count = peer.stagePeer(targetDir=targetDir, ownership=ownership) # note: utilize effective user's default umask logger.info("Staged %d files for peer [%s]." 
% (count, peer.name)) peer.writeStageIndicator() except (ValueError, IOError, OSError), e: logger.error("Error staging [%s]: %s" % (peer.name, e)) writeIndicatorFile(dailyDir, STAGE_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the 'stage' action successfully.") ######################################################################## # Private utility functions ######################################################################## ################################ # _createStagingDirs() function ################################ def _createStagingDirs(config, dailyDir, peers): """ Creates staging directories as required. The main staging directory is the passed in daily directory, something like C{staging/2002/05/23}. Then, individual peers get their own directories, i.e. C{staging/2002/05/23/host}. @param config: Config object. @param dailyDir: Daily staging directory. @param peers: List of all configured peers. @return: Dictionary mapping peer name to staging directory. """ mapping = {} if os.path.isdir(dailyDir): logger.warn("Staging directory [%s] already existed." % dailyDir) else: try: logger.debug("Creating staging directory [%s]." % dailyDir) os.makedirs(dailyDir) for path in [ dailyDir, os.path.join(dailyDir, ".."), os.path.join(dailyDir, "..", ".."), ]: changeOwnership(path, config.options.backupUser, config.options.backupGroup) except Exception, e: raise Exception("Unable to create staging directory: %s" % e) for peer in peers: peerDir = os.path.join(dailyDir, peer.name) mapping[peer.name] = peerDir if os.path.isdir(peerDir): logger.warn("Peer staging directory [%s] already existed." % peerDir) else: try: logger.debug("Creating peer staging directory [%s]." 
% peerDir) os.makedirs(peerDir) changeOwnership(peerDir, config.options.backupUser, config.options.backupGroup) except Exception, e: raise Exception("Unable to create staging directory: %s" % e) return mapping ######################################################################## # Private attribute "getter" functions ######################################################################## #################################### # _getIgnoreFailuresFlag() function #################################### def _getIgnoreFailuresFlag(options, config, peer): """ Gets the ignore failures flag based on options, configuration, and peer. @param options: Options object @param config: Configuration object @param peer: Peer to check @return: Whether to ignore stage failures for this peer """ logger.debug("Ignore failure mode for this peer: %s" % peer.ignoreFailureMode) if peer.ignoreFailureMode is None or peer.ignoreFailureMode == "none": return False elif peer.ignoreFailureMode == "all": return True else: if options.full or isStartOfWeek(config.options.startingDay): return peer.ignoreFailureMode == "weekly" else: return peer.ignoreFailureMode == "daily" ########################## # _getDailyDir() function ########################## def _getDailyDir(config): """ Gets the daily staging directory. This is just a directory in the form C{staging/YYYY/MM/DD}, i.e. C{staging/2000/10/07}, except it will be an absolute path based on C{config.stage.targetDir}. @param config: Config object @return: Path of daily staging directory. """ dailyDir = os.path.join(config.stage.targetDir, time.strftime(DIR_TIME_FORMAT)) logger.debug("Daily staging directory is [%s]." % dailyDir) return dailyDir ############################ # _getLocalPeers() function ############################ def _getLocalPeers(config): """ Return a list of L{LocalPeer} objects based on configuration. @param config: Config object. @return: List of L{LocalPeer} objects. 
""" localPeers = [] configPeers = None if config.stage.hasPeers(): logger.debug("Using list of local peers from stage configuration.") configPeers = config.stage.localPeers elif config.peers is not None and config.peers.hasPeers(): logger.debug("Using list of local peers from peers configuration.") configPeers = config.peers.localPeers if configPeers is not None: for peer in configPeers: localPeer = LocalPeer(peer.name, peer.collectDir, peer.ignoreFailureMode) localPeers.append(localPeer) logger.debug("Found local peer: [%s]" % localPeer.name) return localPeers ############################# # _getRemotePeers() function ############################# def _getRemotePeers(config): """ Return a list of L{RemotePeer} objects based on configuration. @param config: Config object. @return: List of L{RemotePeer} objects. """ remotePeers = [] configPeers = None if config.stage.hasPeers(): logger.debug("Using list of remote peers from stage configuration.") configPeers = config.stage.remotePeers elif config.peers is not None and config.peers.hasPeers(): logger.debug("Using list of remote peers from peers configuration.") configPeers = config.peers.remotePeers if configPeers is not None: for peer in configPeers: remoteUser = _getRemoteUser(config, peer) localUser = _getLocalUser(config) rcpCommand = _getRcpCommand(config, peer) remotePeer = RemotePeer(peer.name, peer.collectDir, config.options.workingDir, remoteUser, rcpCommand, localUser, ignoreFailureMode=peer.ignoreFailureMode) remotePeers.append(remotePeer) logger.debug("Found remote peer: [%s]" % remotePeer.name) return remotePeers ############################ # _getRemoteUser() function ############################ def _getRemoteUser(config, remotePeer): """ Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section. @param config: Config object. @param remotePeer: Configuration-style remote peer object. @return: Name of remote user associated with remote peer. 
""" if remotePeer.remoteUser is None: return config.options.backupUser return remotePeer.remoteUser ########################### # _getLocalUser() function ########################### def _getLocalUser(config): """ Gets the remote user associated with a remote peer. @param config: Config object. @return: Name of local user that should be used """ if not isRunningAsRoot(): return None return config.options.backupUser ############################ # _getRcpCommand() function ############################ def _getRcpCommand(config, remotePeer): """ Gets the RCP command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param config: Config object. @param remotePeer: Configuration-style remote peer object. @return: RCP command associated with remote peer. """ if remotePeer.rcpCommand is None: return config.options.rcpCommand return remotePeer.rcpCommand CedarBackup2-2.22.0/CedarBackup2/actions/purge.py0000664000175000017500000000711111415165677023144 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: purge.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Implements the standard 'purge' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'purge' action. @sort: executePurge @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup2.filesystem import PurgeItemList ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.purge") ######################################################################## # Public functions ######################################################################## ########################## # executePurge() function ########################## def executePurge(configPath, options, config): """ Executes the purge backup action. For each configured directory, we create a purge item list, remove from the list anything that's younger than the configured retain days value, and then purge from the filesystem what's left. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
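The three-step purge described above (build a list, drop anything younger than the retain days value, delete the rest) can be sketched without the PurgeItemList class; this standalone helper is illustrative only, and the real class covers more cases:

```python
import os
import time

def purgeOldFiles(directory, retainDays):
   # Walk the directory, keep anything younger than retainDays, and delete
   # the rest -- a small-scale analogue of addDirContents / removeYoungFiles
   # / purgeItems.
   cutoff = time.time() - (retainDays * 24 * 60 * 60)
   removed = []
   for root, dirs, files in os.walk(directory):
      for name in files:
         path = os.path.join(root, name)
         if os.path.getmtime(path) <= cutoff:
            os.remove(path)
            removed.append(path)
   return removed
```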
@raise ValueError: Under many generic error conditions """ logger.debug("Executing the 'purge' action.") if config.options is None or config.purge is None: raise ValueError("Purge configuration is not properly filled in.") if config.purge.purgeDirs is not None: for purgeDir in config.purge.purgeDirs: purgeList = PurgeItemList() purgeList.addDirContents(purgeDir.absolutePath) # add everything within directory purgeList.removeYoungFiles(purgeDir.retainDays) # remove young files *from the list* so they won't be purged purgeList.purgeItems() # remove remaining items from the filesystem logger.info("Executed the 'purge' action successfully.") CedarBackup2-2.22.0/CedarBackup2/actions/rebuild.py0000664000175000017500000001437211415165677023457 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: rebuild.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Implements the standard 'rebuild' action. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'rebuild' action. @sort: executeRebuild @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import datetime # Cedar Backup modules from CedarBackup2.util import deriveDayOfWeek from CedarBackup2.actions.util import checkMediaState from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR from CedarBackup2.actions.store import writeImage, writeStoreIndicator, consistencyCheck ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.rebuild") ######################################################################## # Public functions ######################################################################## ############################ # executeRebuild() function ############################ def executeRebuild(configPath, options, config): """ Executes the rebuild backup action. This function exists mainly to recreate a disc that has been "trashed" due to media or hardware problems. Note that the "stage complete" indicator isn't checked for this action. Note that the rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. 
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are problems reading or writing files.
   """
   logger.debug("Executing the 'rebuild' action.")
   if sys.platform == "darwin":
      logger.warn("Warning: the rebuild action is not fully supported on Mac OS X.")
      logger.warn("See the Cedar Backup software manual for further information.")
   if config.options is None or config.store is None:
      raise ValueError("Rebuild configuration is not properly filled in.")
   if config.store.checkMedia:
      checkMediaState(config.store)  # raises exception if media is not initialized
   stagingDirs = _findRebuildDirs(config)
   writeImage(config, True, stagingDirs)
   if config.store.checkData:
      if sys.platform == "darwin":
         logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.")
         logger.warn("See the Cedar Backup software manual for further information.")
      else:
         logger.debug("Running consistency check of media.")
         consistencyCheck(config, stagingDirs)
   writeStoreIndicator(config, stagingDirs)
   logger.info("Executed the 'rebuild' action successfully.")


########################################################################
# Private utility functions
########################################################################

##############################
# _findRebuildDirs() function
##############################

def _findRebuildDirs(config):
   """
   Finds the set of directories to be included in a disc rebuild.

   The rebuild action is supposed to recreate the "last week's" disc.  This
   won't always be possible if some of the staging directories are missing.
   However, the general procedure is to look back into the past no further
   than the previous "starting day of week", and then work forward from there
   trying to find all of the staging directories between then and now that
   still exist and have a stage indicator.

   @param config: Config object.
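The look-back window described above can be checked in isolation; this helper mirrors the `days` computation in `_findRebuildDirs` (the function name is illustrative):

```python
import datetime

def rebuildWindowDays(startDay, today):
   # Number of calendar days (including today) back to the most recent
   # occurrence of the configured starting day of week, where
   # 0=Monday ... 6=Sunday as returned by datetime.date.weekday().
   if today.weekday() >= startDay:
      return today.weekday() - startDay + 1
   return 7 - (startDay - today.weekday()) + 1

# 2013-05-08 was a Wednesday: with a Monday start the window is Mon/Tue/Wed
print(rebuildWindowDays(0, datetime.date(2013, 5, 8)))  # 3
```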
@return: Correct staging dir, as a dict mapping directory to date suffix. @raise IOError: If we do not find at least one staging directory. """ stagingDirs = {} start = deriveDayOfWeek(config.options.startingDay) today = datetime.date.today() if today.weekday() >= start: days = today.weekday() - start + 1 else: days = 7 - (start - today.weekday()) + 1 for i in range (0, days): currentDay = today - datetime.timedelta(days=i) dateSuffix = currentDay.strftime(DIR_TIME_FORMAT) stageDir = os.path.join(config.store.sourceDir, dateSuffix) indicator = os.path.join(stageDir, STAGE_INDICATOR) if os.path.isdir(stageDir) and os.path.exists(indicator): logger.info("Rebuild process will include stage directory [%s]" % stageDir) stagingDirs[stageDir] = dateSuffix if len(stagingDirs) == 0: raise IOError("Unable to find any staging directories for rebuild process.") return stagingDirs CedarBackup2-2.22.0/CedarBackup2/writers/0002775000175000017500000000000012143054371021474 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/CedarBackup2/writers/util.py0000664000175000017500000006655511415165677023057 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: util.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Provides utilities related to image writers. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides utilities related to image writers. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_SECTORS, encodePath ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.writers.util") MKISOFS_COMMAND = [ "mkisofs", ] VOLNAME_COMMAND = [ "volname", ] ######################################################################## # Functions used to portably validate certain kinds of values ######################################################################## ############################ # validateDevice() function ############################ def validateDevice(device, unittest=False): """ Validates a configured device. The device must be an absolute path, must exist, and must be writable. The unittest flag turns off validation of the device on disk. @param device: Filesystem device path. 
@param unittest: Indicates whether we're unit testing. @return: Device as a string, for instance C{"/dev/cdrw"} @raise ValueError: If the device value is invalid. @raise ValueError: If some path cannot be encoded properly. """ if device is None: raise ValueError("Device must be filled in.") device = encodePath(device) if not os.path.isabs(device): raise ValueError("Backup device must be an absolute path.") if not unittest and not os.path.exists(device): raise ValueError("Backup device must exist on disk.") if not unittest and not os.access(device, os.W_OK): raise ValueError("Backup device is not writable by the current user.") return device ############################ # validateScsiId() function ############################ def validateScsiId(scsiId): """ Validates a SCSI id string. SCSI id must be a string in the form C{[:]scsibus,target,lun}. For Mac OS X (Darwin), we also accept the form C{IO.*Services[/N]}. @note: For consistency, if C{None} is passed in, C{None} will be returned. @param scsiId: SCSI id for the device. @return: SCSI id as a string, for instance C{"ATA:1,0,0"} @raise ValueError: If the SCSI id string is invalid. """ if scsiId is not None: pattern = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$") if not pattern.search(scsiId): pattern = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$") if not pattern.search(scsiId): raise ValueError("SCSI id is not in a valid form.") return scsiId ################################ # validateDriveSpeed() function ################################ def validateDriveSpeed(driveSpeed): """ Validates a drive speed value. Drive speed must be an integer which is >= 1. @note: For consistency, if C{None} is passed in, C{None} will be returned. @param driveSpeed: Speed at which the drive writes. @return: Drive speed as an integer @raise ValueError: If the drive speed value is invalid. 
""" if driveSpeed is None: return None try: intSpeed = int(driveSpeed) except TypeError: raise ValueError("Drive speed must be an integer >= 1.") if intSpeed < 1: raise ValueError("Drive speed must an integer >= 1.") return intSpeed ######################################################################## # General writer-related utility functions ######################################################################## ############################ # readMediaLabel() function ############################ def readMediaLabel(devicePath): """ Reads the media label (volume name) from the indicated device. The volume name is read using the C{volname} command. @param devicePath: Device path to read from @return: Media label as a string, or None if there is no name or it could not be read. """ args = [ devicePath, ] command = resolveCommand(VOLNAME_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: return None if output is None or len(output) < 1: return None return output[0].rstrip() ######################################################################## # IsoImage class definition ######################################################################## class IsoImage(object): ###################### # Class documentation ###################### """ Represents an ISO filesystem image. Summary ======= This object represents an ISO 9660 filesystem image. It is implemented in terms of the C{mkisofs} program, which has been ported to many operating systems and platforms. A "sensible subset" of the C{mkisofs} functionality is made available through the public interface, allowing callers to set a variety of basic options such as publisher id, application id, etc. as well as specify exactly which files and directories they want included in their image. 
By default, the image is created using the Rock Ridge protocol (using the C{-r} option to C{mkisofs}) because Rock Ridge discs are generally more useful on UN*X filesystems than standard ISO 9660 images. However, callers can fall back to the default C{mkisofs} functionality by setting the C{useRockRidge} instance variable to C{False}. Note, however, that this option is not well-tested. Where Files and Directories are Placed in the Image =================================================== Although this class is implemented in terms of the C{mkisofs} program, its standard "image contents" semantics are slightly different than the original C{mkisofs} semantics. The difference is that files and directories are added to the image with some additional information about their source directory kept intact. As an example, suppose you add the file C{/etc/profile} to your image and you do not configure a graft point. The file C{/profile} will be created in the image. The behavior for directories is similar. For instance, suppose that you add C{/etc/X11} to the image and do not configure a graft point. In this case, the directory C{/X11} will be created in the image, even if the original C{/etc/X11} directory is empty. I{This behavior differs from the standard C{mkisofs} behavior!} If a graft point is configured, it will be used to modify the point at which a file or directory is added into an image. Using the examples from above, let's assume you set a graft point of C{base} when adding C{/etc/profile} and C{/etc/X11} to your image. In this case, the file C{/base/profile} and the directory C{/base/X11} would be added to the image. I feel that this behavior is more consistent than the original C{mkisofs} behavior. However, to be fair, it is not quite as flexible, and some users might not like it. For this reason, the C{contentsOnly} parameter to the L{addEntry} method can be used to revert to the original behavior if desired. 
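The graft-point examples above can be condensed into a tiny helper (hypothetical, not the IsoImage API, and ignoring the C{contentsOnly} variation) that computes where a source path lands in the image:

```python
import os

def imagePath(sourcePath, graftPoint=None):
   # Without a graft point, /etc/profile lands at /profile; with graft
   # point "base", it lands at /base/profile.  Directories behave the same,
   # e.g. /etc/X11 becomes /X11 or /base/X11.
   name = os.path.basename(sourcePath.rstrip("/"))
   if graftPoint is None:
      return "/" + name
   return "/" + graftPoint.strip("/") + "/" + name

print(imagePath("/etc/profile"))      # /profile
print(imagePath("/etc/X11", "base"))  # /base/X11
```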
@sort: __init__, addEntry, getEstimatedSize, _getEstimatedSize, writeImage, _buildDirEntries, _buildGeneralArgs, _buildSizeArgs, _buildWriteArgs, device, boundaries, graftPoint, useRockRidge, applicationId, biblioFile, publisherId, preparerId, volumeId """ ############## # Constructor ############## def __init__(self, device=None, boundaries=None, graftPoint=None): """ Initializes an empty ISO image object. Only the most commonly-used configuration items can be set using this constructor. If you have a need to change the others, do so immediately after creating your object. The device and boundaries values are both required in order to write multisession discs. If either is missing or C{None}, a multisession disc will not be written. The boundaries tuple is in terms of ISO sectors, as built by an image writer class and returned in a L{writer.MediaCapacity} object. @param device: Name of the device that the image will be written to @type device: Either a filesystem path or a SCSI address @param boundaries: Session boundaries as required by C{mkisofs} @type boundaries: Tuple C{(last_sess_start,next_sess_start)} as returned from C{cdrecord -msinfo}, or C{None} @param graftPoint: Default graft point for this image. @type graftPoint: String representing a graft point path (see L{addEntry}). """ self._device = None self._boundaries = None self._graftPoint = None self._useRockRidge = True self._applicationId = None self._biblioFile = None self._publisherId = None self._preparerId = None self._volumeId = None self.entries = { } self.device = device self.boundaries = boundaries self.graftPoint = graftPoint self.useRockRidge = True self.applicationId = None self.biblioFile = None self.publisherId = None self.preparerId = None self.volumeId = None logger.debug("Created new ISO image object.") ############# # Properties ############# def _setDevice(self, value): """ Property target used to set the device value. 
If not C{None}, the value can be either an absolute path or a SCSI id. @raise ValueError: If the value is not valid """ try: if value is None: self._device = None else: if os.path.isabs(value): self._device = value else: self._device = validateScsiId(value) except ValueError: raise ValueError("Device must either be an absolute path or a valid SCSI id.") def _getDevice(self): """ Property target used to get the device value. """ return self._device def _setBoundaries(self, value): """ Property target used to set the boundaries tuple. If not C{None}, the value must be a tuple of two integers. @raise ValueError: If the tuple values are not integers. @raise IndexError: If the tuple does not contain enough elements. """ if value is None: self._boundaries = None else: self._boundaries = (int(value[0]), int(value[1])) def _getBoundaries(self): """ Property target used to get the boundaries value. """ return self._boundaries def _setGraftPoint(self, value): """ Property target used to set the graft point. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The graft point must be a non-empty string.") self._graftPoint = value def _getGraftPoint(self): """ Property target used to get the graft point. """ return self._graftPoint def _setUseRockRidge(self, value): """ Property target used to set the use RockRidge flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._useRockRidge = True else: self._useRockRidge = False def _getUseRockRidge(self): """ Property target used to get the use RockRidge flag. """ return self._useRockRidge def _setApplicationId(self, value): """ Property target used to set the application id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("The application id must be a non-empty string.") self._applicationId = value def _getApplicationId(self): """ Property target used to get the application id. """ return self._applicationId def _setBiblioFile(self, value): """ Property target used to set the biblio file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The biblio file must be a non-empty string.") self._biblioFile = value def _getBiblioFile(self): """ Property target used to get the biblio file. """ return self._biblioFile def _setPublisherId(self, value): """ Property target used to set the publisher id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The publisher id must be a non-empty string.") self._publisherId = value def _getPublisherId(self): """ Property target used to get the publisher id. """ return self._publisherId def _setPreparerId(self, value): """ Property target used to set the preparer id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The preparer id must be a non-empty string.") self._preparerId = value def _getPreparerId(self): """ Property target used to get the preparer id. """ return self._preparerId def _setVolumeId(self, value): """ Property target used to set the volume id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The volume id must be a non-empty string.") self._volumeId = value def _getVolumeId(self): """ Property target used to get the volume id. 
""" return self._volumeId device = property(_getDevice, _setDevice, None, "Device that image will be written to (device path or SCSI id).") boundaries = property(_getBoundaries, _setBoundaries, None, "Session boundaries as required by C{mkisofs}.") graftPoint = property(_getGraftPoint, _setGraftPoint, None, "Default image-wide graft point (see L{addEntry} for details).") useRockRidge = property(_getUseRockRidge, _setUseRockRidge, None, "Indicates whether to use RockRidge (default is C{True}).") applicationId = property(_getApplicationId, _setApplicationId, None, "Optionally specifies the ISO header application id value.") biblioFile = property(_getBiblioFile, _setBiblioFile, None, "Optionally specifies the ISO bibliographic file name.") publisherId = property(_getPublisherId, _setPublisherId, None, "Optionally specifies the ISO header publisher id value.") preparerId = property(_getPreparerId, _setPreparerId, None, "Optionally specifies the ISO header preparer id value.") volumeId = property(_getVolumeId, _setVolumeId, None, "Optionally specifies the ISO header volume id value.") ######################### # General public methods ######################### def addEntry(self, path, graftPoint=None, override=False, contentsOnly=False): """ Adds an individual file or directory into the ISO image. The path must exist and must be a file or a directory. By default, the entry will be placed into the image at the root directory, but this behavior can be overridden using the C{graftPoint} parameter or instance variable. You can use the C{contentsOnly} behavior to revert to the "original" C{mkisofs} behavior for adding directories, which is to add only the items within the directory, and not the directory itself. @note: Things get I{odd} if you try to add a directory to an image that will be written to a multisession disc, and the same directory already exists in an earlier session on that disc. Not all of the data gets written. 
You really wouldn't want to do this anyway, I guess. @note: An exception will be thrown if the path has already been added to the image, unless the C{override} parameter is set to C{True}. @note: The method's C{graftPoint} parameter overrides the object-wide instance variable. If neither the method parameter nor the object-wide value is set, the path will be written at the image root. The graft point behavior is determined by the value which is in effect I{at the time this method is called}, so you I{must} set the object-wide value before calling this method for the first time, or your image may not be consistent. @note: You I{cannot} use the local C{graftPoint} parameter to "turn off" an object-wide instance variable by setting it to C{None}. Python's default argument functionality buys us a lot, but it can't make this method psychic. :) @param path: File or directory to be added to the image @type path: String representing a path on disk @param graftPoint: Graft point to be used when adding this entry @type graftPoint: String representing a graft point path, as described above @param override: Override an existing entry with the same path. @type override: Boolean true/false @param contentsOnly: Add directory contents only (standard C{mkisofs} behavior). @type contentsOnly: Boolean true/false @raise ValueError: If path is not a file or directory, or does not exist. @raise ValueError: If the path has already been added, and override is not set. @raise ValueError: If a path cannot be encoded properly. 
""" path = encodePath(path) if not override: if path in self.entries.keys(): raise ValueError("Path has already been added to the image.") if os.path.islink(path): raise ValueError("Path must not be a link.") if os.path.isdir(path): if graftPoint is not None: if contentsOnly: self.entries[path] = graftPoint else: self.entries[path] = os.path.join(graftPoint, os.path.basename(path)) elif self.graftPoint is not None: if contentsOnly: self.entries[path] = self.graftPoint else: self.entries[path] = os.path.join(self.graftPoint, os.path.basename(path)) else: if contentsOnly: self.entries[path] = None else: self.entries[path] = os.path.basename(path) elif os.path.isfile(path): if graftPoint is not None: self.entries[path] = graftPoint elif self.graftPoint is not None: self.entries[path] = self.graftPoint else: self.entries[path] = None else: raise ValueError("Path must be a file or a directory.") def getEstimatedSize(self): """ Returns the estimated size (in bytes) of the ISO image. This is implemented via the C{-print-size} option to C{mkisofs}, so it might take a bit of time to execute. However, the result is as accurate as we can get, since it takes into account all of the ISO overhead, the true cost of directories in the structure, etc, etc. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If there are no filesystem entries in the image """ if len(self.entries.keys()) == 0: raise ValueError("Image does not contain any entries.") return self._getEstimatedSize(self.entries) def _getEstimatedSize(self, entries): """ Returns the estimated size (in bytes) for the passed-in entries dictionary. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. 
""" args = self._buildSizeArgs(entries) command = resolveCommand(MKISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error (%d) executing mkisofs command to estimate size." % result) if len(output) != 1: raise IOError("Unable to parse mkisofs output.") try: sectors = float(output[0]) size = convertSize(sectors, UNIT_SECTORS, UNIT_BYTES) return size except: raise IOError("Unable to parse mkisofs output.") def writeImage(self, imagePath): """ Writes this image to disk using the image path. @param imagePath: Path to write image out as @type imagePath: String representing a path on disk @raise IOError: If there is an error writing the image to disk. @raise ValueError: If there are no filesystem entries in the image @raise ValueError: If a path cannot be encoded properly. """ imagePath = encodePath(imagePath) if len(self.entries.keys()) == 0: raise ValueError("Image does not contain any entries.") args = self._buildWriteArgs(self.entries, imagePath) command = resolveCommand(MKISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=False) if result != 0: raise IOError("Error (%d) executing mkisofs command to build image." % result) ######################################### # Methods used to build mkisofs commands ######################################### @staticmethod def _buildDirEntries(entries): """ Uses an entries dictionary to build a list of directory locations for use by C{mkisofs}. We build a list of entries that can be passed to C{mkisofs}. Each entry is either raw (if no graft point was configured) or in graft-point form as described above (if a graft point was configured). The dictionary keys are the path names, and the values are the graft points, if any. @param entries: Dictionary of image entries (i.e. 
self.entries) @return: List of directory locations for use by C{mkisofs} """ dirEntries = [] for key in entries.keys(): if entries[key] is None: dirEntries.append(key) else: dirEntries.append("%s/=%s" % (entries[key].strip("/"), key)) return dirEntries def _buildGeneralArgs(self): """ Builds a list of general arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] if self.applicationId is not None: args.append("-A") args.append(self.applicationId) if self.biblioFile is not None: args.append("-biblio") args.append(self.biblioFile) if self.publisherId is not None: args.append("-publisher") args.append(self.publisherId) if self.preparerId is not None: args.append("-p") args.append(self.preparerId) if self.volumeId is not None: args.append("-V") args.append(self.volumeId) return args def _buildSizeArgs(self, entries): """ Builds a list of arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. The command will be built to just return size output (a simple count of sectors via the C{-print-size} option), rather than an image file on disk. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @param entries: Dictionary of image entries (i.e. self.entries) @return: List suitable for passing to L{util.executeCommand} as C{args}. 
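The graft-point form built above can be shown in isolation. A minimal sketch of the same C{graft/=path} convention, using a hypothetical entries dict (this is not the module's API, just the string-building logic):

```python
def build_dir_entries(entries):
    # Build mkisofs location arguments: raw paths when there is no graft
    # point, otherwise "graft/=path" with surrounding slashes normalized.
    dir_entries = []
    for path, graft in entries.items():
        if graft is None:
            dir_entries.append(path)
        else:
            dir_entries.append("%s/=%s" % (graft.strip("/"), path))
    return dir_entries
```

Passed after C{-graft-points}, an entry like C{base/X11/=/etc/X11} tells C{mkisofs} to place C{/etc/X11} at C{/base/X11} in the image.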
""" args = self._buildGeneralArgs() args.append("-print-size") args.append("-graft-points") if self.useRockRidge: args.append("-r") if self.device is not None and self.boundaries is not None: args.append("-C") args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) args.append("-M") args.append(self.device) args.extend(self._buildDirEntries(entries)) return args def _buildWriteArgs(self, entries, imagePath): """ Builds a list of arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. The command will be built to write an image to disk. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @param entries: Dictionary of image entries (i.e. self.entries) @param imagePath: Path to write image out as @type imagePath: String representing a path on disk @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = self._buildGeneralArgs() args.append("-graft-points") if self.useRockRidge: args.append("-r") args.append("-o") args.append(imagePath) if self.device is not None and self.boundaries is not None: args.append("-C") args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) args.append("-M") args.append(self.device) args.extend(self._buildDirEntries(entries)) return args CedarBackup2-2.22.0/CedarBackup2/writers/cdwriter.py0000664000175000017500000015200212143053141023661 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: cdwriter.py 1041 2013-05-10 02:05:13Z pronovic $ # Purpose : Provides functionality related to CD writer devices. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides functionality related to CD writer devices. @sort: MediaDefinition, MediaCapacity, CdWriter, MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 @var MEDIA_CDRW_74: Constant representing 74-minute CD-RW media. @var MEDIA_CDR_74: Constant representing 74-minute CD-R media. @var MEDIA_CDRW_80: Constant representing 80-minute CD-RW media. @var MEDIA_CDR_80: Constant representing 80-minute CD-R media. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging import tempfile import time # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import convertSize, displayBytes, encodePath from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES from CedarBackup2.writers.util import validateDevice, validateScsiId, validateDriveSpeed from CedarBackup2.writers.util import IsoImage ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.writers.cdwriter") MEDIA_CDRW_74 = 1 MEDIA_CDR_74 = 2 MEDIA_CDRW_80 = 3 MEDIA_CDR_80 = 4 CDRECORD_COMMAND = [ "cdrecord", ] EJECT_COMMAND = [ "eject", ] MKISOFS_COMMAND = [ "mkisofs", ] ######################################################################## # MediaDefinition class definition ######################################################################## class MediaDefinition(object): """ Class encapsulating information about CD media definitions. The following media types are accepted: - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) Note that all of the capacities associated with a media definition are in terms of ISO sectors (C{util.ISO_SECTOR_SIZE)}. @sort: __init__, mediaType, rewritable, initialLeadIn, leadIn, capacity """ def __init__(self, mediaType): """ Creates a media definition for the indicated media type. @param mediaType: Type of the media, as discussed above. 
@raise ValueError: If the media type is unknown or unsupported. """ self._mediaType = None self._rewritable = False self._initialLeadIn = 0. self._leadIn = 0.0 self._capacity = 0.0 self._setValues(mediaType) def _setValues(self, mediaType): """ Sets values based on media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ if mediaType not in [MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80]: raise ValueError("Invalid media type %d." % mediaType) self._mediaType = mediaType self._initialLeadIn = 11400.0 # per cdrecord's documentation self._leadIn = 6900.0 # per cdrecord's documentation if self._mediaType == MEDIA_CDR_74: self._rewritable = False self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) elif self._mediaType == MEDIA_CDRW_74: self._rewritable = True self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) elif self._mediaType == MEDIA_CDR_80: self._rewritable = False self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS) elif self._mediaType == MEDIA_CDRW_80: self._rewritable = True self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS) def _getMediaType(self): """ Property target used to get the media type value. """ return self._mediaType def _getRewritable(self): """ Property target used to get the rewritable flag value. """ return self._rewritable def _getInitialLeadIn(self): """ Property target used to get the initial lead-in value. """ return self._initialLeadIn def _getLeadIn(self): """ Property target used to get the lead-in value. """ return self._leadIn def _getCapacity(self): """ Property target used to get the capacity value. 
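The capacity conversions above boil down to simple sector arithmetic. A quick sketch, assuming C{util.ISO_SECTOR_SIZE} is the standard 2048 bytes per ISO 9660 sector (the helper name here is made up):

```python
ISO_SECTOR_SIZE = 2048.0  # bytes per ISO-9660 data sector (assumed standard value)

def mbytes_to_sectors(mbytes):
    # Convert a media capacity in megabytes to ISO sectors.
    return (mbytes * 1024.0 * 1024.0) / ISO_SECTOR_SIZE
```

Under that assumption, 650 MB media holds 332800 sectors and 700 MB media holds 358400 sectors, before any lead-in is subtracted.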
""" return self._capacity mediaType = property(_getMediaType, None, None, doc="Configured media type.") rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") initialLeadIn = property(_getInitialLeadIn, None, None, doc="Initial lead-in required for first image written to media.") leadIn = property(_getLeadIn, None, None, doc="Lead-in required on successive images written to media.") capacity = property(_getCapacity, None, None, doc="Total capacity of the media before any required lead-in.") ######################################################################## # MediaCapacity class definition ######################################################################## class MediaCapacity(object): """ Class encapsulating information about CD media capacity. Space used includes the required media lead-in (unless the disk is unused). Space available attempts to provide a picture of how many bytes are available for data storage, including any required lead-in. The boundaries value is either C{None} (if multisession discs are not supported or if the disc has no boundaries) or in exactly the form provided by C{cdrecord -msinfo}. It can be passed as-is to the C{IsoImage} class. @sort: __init__, bytesUsed, bytesAvailable, boundaries, totalCapacity, utilized """ def __init__(self, bytesUsed, bytesAvailable, boundaries): """ Initializes a capacity object. @raise IndexError: If the boundaries tuple does not have enough elements. @raise ValueError: If the boundaries values are not integers. @raise ValueError: If the bytes used and available values are not floats. """ self._bytesUsed = float(bytesUsed) self._bytesAvailable = float(bytesAvailable) if boundaries is None: self._boundaries = None else: self._boundaries = (int(boundaries[0]), int(boundaries[1])) def __str__(self): """ Informal string representation for class instance. 
""" return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized) def _getBytesUsed(self): """ Property target to get the bytes-used value. """ return self._bytesUsed def _getBytesAvailable(self): """ Property target to get the bytes-available value. """ return self._bytesAvailable def _getBoundaries(self): """ Property target to get the boundaries tuple. """ return self._boundaries def _getTotalCapacity(self): """ Property target to get the total capacity (used + available). """ return self.bytesUsed + self.bytesAvailable def _getUtilized(self): """ Property target to get the percent of capacity which is utilized. """ if self.bytesAvailable <= 0.0: return 100.0 elif self.bytesUsed <= 0.0: return 0.0 return (self.bytesUsed / self.totalCapacity) * 100.0 bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.") bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.") boundaries = property(_getBoundaries, None, None, doc="Session disc boundaries, in terms of ISO sectors.") totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.") utilized = property(_getUtilized, None, None, "Percentage of the total capacity which is utilized.") ######################################################################## # _ImageProperties class definition ######################################################################## class _ImageProperties(object): """ Simple value object to hold image properties for C{DvdWriter}. 
""" def __init__(self): self.newDisc = False self.tmpdir = None self.mediaLabel = None self.entries = None # dict mapping path to graft point ######################################################################## # CdWriter class definition ######################################################################## class CdWriter(object): ###################### # Class documentation ###################### """ Class representing a device that knows how to write CD media. Summary ======= This is a class representing a device that knows how to write CD media. It provides common operations for the device, such as ejecting the media, writing an ISO image to the media, or checking for the current media capacity. It also provides a place to store device attributes, such as whether the device supports writing multisession discs, etc. This class is implemented in terms of the C{eject} and C{cdrecord} programs, both of which should be available on most UN*X platforms. Image Writer Interface ====================== The following methods make up the "image writer" interface shared with other kinds of writers (such as DVD writers):: __init__ initializeImage() addImageEntry() writeImage() setImageNewDisc() retrieveCapacity() getEstimatedImageSize() Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer. The media attribute is also assumed to be available. Media Types =========== This class knows how to write to two different kinds of media, represented by the following constants: - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) Most hardware can read and write both 74-minute and 80-minute CD-R and CD-RW media. Some older drives may only be able to write CD-R media. 
The difference between the two is that CD-RW media can be rewritten (erased), while CD-R media cannot be. I do not support any other configurations for a couple of reasons. The first is that I've never tested any other kind of media. The second is that anything other than 74 or 80 minute is apparently non-standard. Device Attributes vs. Media Attributes ====================================== A given writer instance has two different kinds of attributes associated with it, which I call device attributes and media attributes. Device attributes are things which can be determined without looking at the media, such as whether the drive supports writing multisession disks or has a tray. Media attributes are attributes which vary depending on the state of the media, such as the remaining capacity on a disc. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls. Talking to Hardware =================== This class needs to talk to CD writer hardware in two different ways: through cdrecord to actually write to the media, and through the filesystem to do things like open and close the tray. Historically, CdWriter has interacted with cdrecord using the scsiId attribute, and with most other utilities using the device attribute. This changed somewhat in Cedar Backup 2.9.0. When Cedar Backup was first written, the only way to interact with cdrecord was by using a SCSI device id. IDE devices were mapped to pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" arrived, and it became common to see C{ATA:1,0,0} or C{ATAPI:0,0,0} as a way to address IDE hardware. By late 2006, C{ATA} and C{ATAPI} had apparently been deprecated in favor of just addressing the IDE device directly by name, i.e. C{/dev/cdrw}. Because of this latest development, it no longer makes sense to require a CdWriter to be created with a SCSI id -- there might not be one. 
So, the passed-in SCSI id is now optional. Also, there is now a hardwareId attribute. This attribute is filled in with either the SCSI id (if provided) or the device (otherwise). The hardware id is the value that will be passed to cdrecord in the C{dev=} argument. Testing ======= It's rather difficult to test this code in an automated fashion, even if you have access to a physical CD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, much of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all. @sort: __init__, isRewritable, _retrieveProperties, retrieveCapacity, _getBoundaries, _calculateCapacity, openTray, closeTray, refreshMedia, writeImage, _blankMedia, _parsePropertiesOutput, _parseBoundariesOutput, _buildOpenTrayArgs, _buildCloseTrayArgs, _buildPropertiesArgs, _buildBoundariesArgs, _buildBlankArgs, _buildWriteArgs, device, scsiId, hardwareId, driveSpeed, media, deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject, initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize """ ############## # Constructor ############## def __init__(self, device, scsiId=None, driveSpeed=None, mediaType=MEDIA_CDRW_74, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False): """ Initializes a CD writer object. 
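The hardware-id fallback described above is just a preference order. A trivial sketch (the function name and the example ids below are made up; the real value comes from the C{hardwareId} property):

```python
def hardware_id(device, scsi_id=None):
    # Value handed to cdrecord's dev= argument: prefer the SCSI id when
    # one was supplied, otherwise fall back to the raw device path.
    return scsi_id if scsi_id is not None else device
```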
The current user must have write access to the device at the time the object is instantiated, or an exception will be thrown. However, no media-related validation is done, and in fact there is no need for any media to be in the drive until one of the other media attribute-related methods is called. The various instance variables such as C{deviceType}, C{deviceVendor}, etc. might be C{None}, if we're unable to parse this specific information from the C{cdrecord} output. This information is just for reference. The SCSI id is optional, but the device path is required. If the SCSI id is passed in, then the hardware id attribute will be taken from the SCSI id. Otherwise, the hardware id will be taken from the device. If cdrecord improperly detects whether your writer device has a tray and can be safely opened and closed, then pass in C{noEject=True}. This will override the properties and the device will never be ejected. @note: The C{unittest} parameter should never be set to C{True} outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose. @param device: Filesystem device associated with this writer. @type device: Absolute path to a filesystem device, i.e. C{/dev/cdrw} @param scsiId: SCSI id for the device (optional). @type scsiId: If provided, SCSI id in the form C{[:]scsibus,target,lun} @param driveSpeed: Speed at which the drive writes. @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. @param mediaType: Type of the media that is assumed to be in the drive. @type mediaType: One of the valid media types as discussed above. @param noEject: Overrides properties to indicate that the device does not support eject. 
@type noEject: Boolean true/false @param refreshMediaDelay: Refresh media delay to use, if any @type refreshMediaDelay: Number of seconds, an integer >= 0 @param ejectDelay: Eject delay to use, if any @type ejectDelay: Number of seconds, an integer >= 0 @param unittest: Turns off certain validations, for use in unit testing. @type unittest: Boolean true/false @raise ValueError: If the device is not valid for some reason. @raise ValueError: If the SCSI id is not in a valid form. @raise ValueError: If the drive speed is not an integer >= 1. @raise IOError: If device properties could not be read for some reason. """ self._image = None # optionally filled in by initializeImage() self._device = validateDevice(device, unittest) self._scsiId = validateScsiId(scsiId) self._driveSpeed = validateDriveSpeed(driveSpeed) self._media = MediaDefinition(mediaType) self._noEject = noEject self._refreshMediaDelay = refreshMediaDelay self._ejectDelay = ejectDelay if not unittest: (self._deviceType, self._deviceVendor, self._deviceId, self._deviceBufferSize, self._deviceSupportsMulti, self._deviceHasTray, self._deviceCanEject) = self._retrieveProperties() ############# # Properties ############# def _getDevice(self): """ Property target used to get the device value. """ return self._device def _getScsiId(self): """ Property target used to get the SCSI id value. """ return self._scsiId def _getHardwareId(self): """ Property target used to get the hardware id value. """ if self._scsiId is None: return self._device return self._scsiId def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _getMedia(self): """ Property target used to get the media description. """ return self._media def _getDeviceType(self): """ Property target used to get the device type. """ return self._deviceType def _getDeviceVendor(self): """ Property target used to get the device vendor. 
""" return self._deviceVendor def _getDeviceId(self): """ Property target used to get the device id. """ return self._deviceId def _getDeviceBufferSize(self): """ Property target used to get the device buffer size. """ return self._deviceBufferSize def _getDeviceSupportsMulti(self): """ Property target used to get the device-support-multi flag. """ return self._deviceSupportsMulti def _getDeviceHasTray(self): """ Property target used to get the device-has-tray flag. """ return self._deviceHasTray def _getDeviceCanEject(self): """ Property target used to get the device-can-eject flag. """ return self._deviceCanEject def _getRefreshMediaDelay(self): """ Property target used to get the configured refresh media delay, in seconds. """ return self._refreshMediaDelay def _getEjectDelay(self): """ Property target used to get the configured eject delay, in seconds. """ return self._ejectDelay device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") scsiId = property(_getScsiId, None, None, doc="SCSI id for the device, in the form C{[:]scsibus,target,lun}.") hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer, either SCSI id or device path.") driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") deviceType = property(_getDeviceType, None, None, doc="Type of the device, as returned from C{cdrecord -prcap}.") deviceVendor = property(_getDeviceVendor, None, None, doc="Vendor of the device, as returned from C{cdrecord -prcap}.") deviceId = property(_getDeviceId, None, None, doc="Device identification, as returned from C{cdrecord -prcap}.") deviceBufferSize = property(_getDeviceBufferSize, None, None, doc="Size of the device's write buffer, in bytes.") deviceSupportsMulti = property(_getDeviceSupportsMulti, None, None, doc="Indicates whether device supports multisession discs.") 
deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") ################################################# # Methods related to device and media attributes ################################################# def isRewritable(self): """Indicates whether the media is rewritable per configuration.""" return self._media.rewritable def _retrieveProperties(self): """ Retrieves properties for a device from C{cdrecord}. The results are returned as a tuple of the object device attributes as returned from L{_parsePropertiesOutput}: C{(deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)}. @return: Results tuple as described above. @raise IOError: If there is a problem talking to the device. """ args = CdWriter._buildPropertiesArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error (%d) executing cdrecord command to get properties." % result) return CdWriter._parsePropertiesOutput(output) def retrieveCapacity(self, entireDisc=False, useMulti=True): """ Retrieves capacity for the current media in terms of a C{MediaCapacity} object. If C{entireDisc} is passed in as C{True} the capacity will be for the entire disc, as if it were to be rewritten from scratch. If the drive does not support writing multisession discs or if C{useMulti} is passed in as C{False}, the capacity will also be as if the disc were to be rewritten from scratch, but the indicated boundaries value will be C{None}. 
The same will happen if the disc cannot be read for some reason. Otherwise, the capacity (including the boundaries) will represent whatever space remains on the disc to be filled by future sessions. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @param useMulti: Indicates whether a multisession disc should be assumed, if possible. @type useMulti: Boolean true/false @return: C{MediaCapacity} object describing the capacity of the media. @raise IOError: If the media could not be read for some reason. """ boundaries = self._getBoundaries(entireDisc, useMulti) return CdWriter._calculateCapacity(self._media, boundaries) def _getBoundaries(self, entireDisc=False, useMulti=True): """ Gets the ISO boundaries for the media. If C{entireDisc} is passed in as C{True}, the boundaries will be C{None}, as if the disc were to be rewritten from scratch. If the drive does not support writing multisession discs, the returned value will be C{None}. The same will happen if the disc can't be read for some reason. Otherwise, the returned value will represent the boundaries of the disc's current contents. The results are returned as a tuple of (lower, upper) as needed by the C{IsoImage} class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @param useMulti: Indicates whether a multisession disc should be assumed, if possible. @type useMulti: Boolean true/false @return: Boundaries tuple or C{None}, as described above. @raise IOError: If the media could not be read for some reason. 
""" if not self._deviceSupportsMulti: logger.debug("Device does not support multisession discs; returning boundaries None.") return None elif not useMulti: logger.debug("Use multisession flag is False; returning boundaries None.") return None elif entireDisc: logger.debug("Entire disc flag is True; returning boundaries None.") return None else: args = CdWriter._buildBoundariesArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: logger.debug("Error (%d) executing cdrecord command to get capacity." % result) logger.warn("Unable to read disc (might not be initialized); returning boundaries of None.") return None boundaries = CdWriter._parseBoundariesOutput(output) if boundaries is None: logger.debug("Returning disc boundaries: None") else: logger.debug("Returning disc boundaries: (%d, %d)" % (boundaries[0], boundaries[1])) return boundaries @staticmethod def _calculateCapacity(media, boundaries): """ Calculates capacity for the media in terms of boundaries. If C{boundaries} is C{None} or the lower bound is 0 (zero), then the capacity will be for the entire disc minus the initial lead in. Otherwise, capacity will be as if the caller wanted to add an additional session to the end of the existing data on the disc. @param media: MediaDescription object describing the media capacity. @param boundaries: Session boundaries as returned from L{_getBoundaries}. @return: C{MediaCapacity} object describing the capacity of the media. 
""" if boundaries is None or boundaries[1] == 0: logger.debug("Capacity calculations are based on a complete disc rewrite.") sectorsAvailable = media.capacity - media.initialLeadIn if sectorsAvailable < 0: sectorsAvailable = 0 bytesUsed = 0 bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) else: logger.debug("Capacity calculations are based on a new ISO session.") sectorsAvailable = media.capacity - boundaries[1] - media.leadIn if sectorsAvailable < 0: sectorsAvailable = 0 bytesUsed = convertSize(boundaries[1], UNIT_SECTORS, UNIT_BYTES) bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) logger.debug("Used [%s], available [%s]." % (displayBytes(bytesUsed), displayBytes(bytesAvailable))) return MediaCapacity(bytesUsed, bytesAvailable, boundaries) ####################################################### # Methods used for working with the internal ISO image ####################################################### def initializeImage(self, newDisc, tmpdir, mediaLabel=None): """ Initializes the writer's associated ISO image. This method initializes the C{image} instance variable so that the caller can use the C{addImageEntry} method. Once entries have been added, the C{writeImage} method can be called with no arguments. @param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false. @param tmpdir: Temporary directory to use if needed @type tmpdir: String representing a directory path on disk @param mediaLabel: Media label to be applied to the image, if any @type mediaLabel: String, no more than 25 characters long """ self._image = _ImageProperties() self._image.newDisc = newDisc self._image.tmpdir = encodePath(tmpdir) self._image.mediaLabel = mediaLabel self._image.entries = {} # mapping from path to graft point (if any) def addImageEntry(self, path, graftPoint): """ Adds a filepath entry to the writer's associated ISO image. 
The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass C{None}. @note: Before calling this method, you must call L{initializeImage}. @param path: File or directory to be added to the image @type path: String representing a path on disk @param graftPoint: Graft point to be used when adding this entry @type graftPoint: String representing a graft point path, as described above @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") if not os.path.exists(path): raise ValueError("Path [%s] does not exist." % path) self._image.entries[path] = graftPoint def setImageNewDisc(self, newDisc): """ Resets (overrides) the newDisc flag on the internal image. @param newDisc: New disc flag to set @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") self._image.newDisc = newDisc def getEstimatedImageSize(self): """ Gets the estimated size of the image associated with the writer. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") image = IsoImage() for path in self._image.entries.keys(): image.addEntry(path, self._image.entries[path], override=False, contentsOnly=True) return image.getEstimatedSize() ###################################### # Methods which expose device actions ###################################### def openTray(self): """ Opens the device's tray and leaves it open. This only works if the device has a tray and supports ejecting its media. 
We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. If the writer was constructed with C{noEject=True}, then this is a no-op. Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag. Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy. @raise IOError: If there is an error talking to the device. """ if not self._noEject: if self._deviceHasTray and self._deviceCanEject: args = CdWriter._buildOpenTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") self.unlockTray() result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result) logger.debug("Kludge was apparently successful.") if self.ejectDelay is not None: logger.debug("Per configuration, sleeping %d seconds after opening tray." % self.ejectDelay) time.sleep(self.ejectDelay) def unlockTray(self): """ Unlocks the device's tray. @raise IOError: If there is an error talking to the device. 
""" args = CdWriter._buildUnlockTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to unlock tray." % result) def closeTray(self): """ Closes the device's tray. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. If the writer was constructed with C{noEject=True}, then this is a no-op. @raise IOError: If there is an error talking to the device. """ if not self._noEject: if self._deviceHasTray and self._deviceCanEject: args = CdWriter._buildCloseTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to close tray." % result) def refreshMedia(self): """ Opens and then immediately closes the device's tray, to refresh the device's idea of the media. Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.) This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though. @raise IOError: If there is an error talking to the device. """ self.openTray() self.closeTray() self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! 
if self.refreshMediaDelay is not None: logger.debug("Per configuration, sleeping %d seconds to stabilize media state." % self.refreshMediaDelay) time.sleep(self.refreshMediaDelay) logger.debug("Media refresh complete; hopefully media state is stable now.") def writeImage(self, imagePath=None, newDisc=False, writeMulti=True): """ Writes an ISO image to the media in the device. If C{newDisc} is passed in as C{True}, we assume that the entire disc will be overwritten, and the media will be blanked before writing it if possible (i.e. if the media is rewritable). If C{writeMulti} is passed in as C{True}, then a multisession disc will be written if possible (i.e. if the drive supports writing multisession discs). If C{imagePath} is passed in as C{None}, then the existing image configured with C{initializeImage} will be used. Under these circumstances, the passed-in C{newDisc} flag will be ignored. By default, we assume that the disc can be written multisession and that we should append to the current contents of the disc. In any case, the ISO image must be generated appropriately (i.e. must take into account any existing session boundaries, etc.) @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image @type imagePath: String representing a path on disk @param newDisc: Indicates whether the entire disc will be overwritten. @type newDisc: Boolean true/false. @param writeMulti: Indicates whether a multisession disc should be written, if possible. @type writeMulti: Boolean true/false @raise ValueError: If the image path is not absolute. @raise ValueError: If some path cannot be encoded properly. @raise IOError: If the media could not be written to for some reason. 
@raise ValueError: If no image is passed in and initializeImage() was not previously called """ if imagePath is None: if self._image is None: raise ValueError("Must call initializeImage() before using this method with no image path.") try: imagePath = self._createImage() self._writeImage(imagePath, writeMulti, self._image.newDisc) finally: if imagePath is not None and os.path.exists(imagePath): try: os.unlink(imagePath) except: pass else: imagePath = encodePath(imagePath) if not os.path.isabs(imagePath): raise ValueError("Image path must be absolute.") self._writeImage(imagePath, writeMulti, newDisc) def _createImage(self): """ Creates an ISO image based on configuration in self._image. @return: Path to the newly-created ISO image on disk. @raise IOError: If there is an error writing the image to disk. @raise ValueError: If there are no filesystem entries in the image @raise ValueError: If a path cannot be encoded properly. """ path = None capacity = self.retrieveCapacity(entireDisc=self._image.newDisc) image = IsoImage(self.device, capacity.boundaries) image.volumeId = self._image.mediaLabel # may be None, which is also valid for key in self._image.entries.keys(): image.addEntry(key, self._image.entries[key], override=False, contentsOnly=True) size = image.getEstimatedSize() logger.info("Image size will be %s." % displayBytes(size)) available = capacity.bytesAvailable logger.debug("Media capacity: %s" % displayBytes(available)) if size > available: logger.error("Image [%s] does not fit in available capacity [%s]." % (displayBytes(size), displayBytes(available))) raise IOError("Media does not contain enough capacity to store image.") try: (handle, path) = tempfile.mkstemp(dir=self._image.tmpdir) try: os.close(handle) except: pass image.writeImage(path) logger.debug("Completed creating image [%s]." 
% path) return path except Exception, e: if path is not None and os.path.exists(path): try: os.unlink(path) except: pass raise e def _writeImage(self, imagePath, writeMulti, newDisc): """ Writes an ISO image to disc using cdrecord. The disc is blanked first if C{newDisc} is C{True}. @param imagePath: Path to an ISO image on disk @param writeMulti: Indicates whether a multisession disc should be written, if possible. @param newDisc: Indicates whether the entire disc will be overwritten. """ if newDisc: self._blankMedia() args = CdWriter._buildWriteArgs(self.hardwareId, imagePath, self._driveSpeed, writeMulti and self._deviceSupportsMulti) command = resolveCommand(CDRECORD_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing command to write disc." % result) self.refreshMedia() def _blankMedia(self): """ Blanks the media in the device, if the media is rewritable. @raise IOError: If the media could not be written to for some reason. """ if self.isRewritable(): args = CdWriter._buildBlankArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing command to blank disc." % result) self.refreshMedia() ####################################### # Methods used to parse command output ####################################### @staticmethod def _parsePropertiesOutput(output): """ Parses the output from a C{cdrecord} properties command. The C{output} parameter should be a list of strings as returned from C{executeCommand} for a C{cdrecord} command with arguments as from C{_buildPropertiesArgs}. The list of strings will be parsed to yield information about the properties of the device. The output is expected to be a long list of strings. Unfortunately, the strings aren't in a completely regular format. However, the format of individual lines seems to be regular enough that we can look for specific values. 
Two kinds of parsing take place: one kind of parsing picks out specific values like the device id, device vendor, etc. The other kind of parsing just sets a boolean flag C{True} if a matching line is found. All of the parsing is done with regular expressions. Right now, pretty much nothing in the output is required and we should parse an empty document successfully (albeit resulting in a device that can't eject, doesn't have a tray and doesn't support multisession discs). I had briefly considered erroring out if certain lines weren't found or couldn't be parsed, but that seems like a bad idea given that most of the information is just for reference. The results are returned as a tuple of the object device attributes: C{(deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)}. @param output: Output from a C{cdrecord -prcap} command. @return: Results tuple as described above. @raise IOError: If there is a problem parsing the output. """ deviceType = None deviceVendor = None deviceId = None deviceBufferSize = None deviceSupportsMulti = False deviceHasTray = False deviceCanEject = False typePattern = re.compile(r"(^Device type\s*:\s*)(.*)(\s*)(.*$)") vendorPattern = re.compile(r"(^Vendor_info\s*:\s*'\s*)(.*?)(\s*')(.*$)") idPattern = re.compile(r"(^Identifikation\s*:\s*'\s*)(.*?)(\s*')(.*$)") bufferPattern = re.compile(r"(^\s*Buffer size in KB:\s*)(.*?)(\s*$)") multiPattern = re.compile(r"^\s*Does read multi-session.*$") trayPattern = re.compile(r"^\s*Loading mechanism type: tray.*$") ejectPattern = re.compile(r"^\s*Does support ejection.*$") for line in output: if typePattern.search(line): deviceType = typePattern.search(line).group(2) logger.info("Device type is [%s]." % deviceType) elif vendorPattern.search(line): deviceVendor = vendorPattern.search(line).group(2) logger.info("Device vendor is [%s]." 
% deviceVendor) elif idPattern.search(line): deviceId = idPattern.search(line).group(2) logger.info("Device id is [%s]." % deviceId) elif bufferPattern.search(line): try: kbytes = int(bufferPattern.search(line).group(2)) deviceBufferSize = convertSize(kbytes, UNIT_KBYTES, UNIT_BYTES) logger.info("Device buffer size is [%d] bytes." % deviceBufferSize) except (TypeError, ValueError): pass elif multiPattern.search(line): deviceSupportsMulti = True logger.info("Device does support multisession discs.") elif trayPattern.search(line): deviceHasTray = True logger.info("Device has a tray.") elif ejectPattern.search(line): deviceCanEject = True logger.info("Device can eject its media.") return (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) @staticmethod def _parseBoundariesOutput(output): """ Parses the output from a C{cdrecord} capacity command. The C{output} parameter should be a list of strings as returned from C{executeCommand} for a C{cdrecord} command with arguments as from C{_buildBoundariesArgs}. The list of strings will be parsed to yield information about the capacity of the media in the device. Basically, we expect the list of strings to include just one line, a pair of values. There isn't supposed to be whitespace, but we allow it anyway in the regular expression. Any lines below the one line we parse are completely ignored. It would be a good idea to ignore C{stderr} when executing the C{cdrecord} command that generates output for this method, because sometimes C{cdrecord} spits out kernel warnings mixed in with the actual output. The results are returned as a tuple of (lower, upper) as needed by the C{IsoImage} class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however. @note: If the boundaries output can't be parsed, we return C{None}. @param output: Output from a C{cdrecord -msinfo} command. @return: Boundaries tuple as described above. 
@raise IOError: If there is a problem parsing the output. """ if len(output) < 1: logger.warn("Unable to read disc (might not be initialized); returning full capacity.") return None boundaryPattern = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)") parsed = boundaryPattern.search(output[0]) if not parsed: raise IOError("Unable to parse output of boundaries command.") try: boundaries = ( int(parsed.group(2)), int(parsed.group(4)) ) except (TypeError, ValueError): raise IOError("Unable to parse output of boundaries command.") return boundaries ################################# # Methods used to build commands ################################# @staticmethod def _buildOpenTrayArgs(device): """ Builds a list of arguments to be passed to an C{eject} command. The arguments will cause the C{eject} command to open the tray and eject the media. No validation is done by this method as to whether this action actually makes sense. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append(device) return args @staticmethod def _buildUnlockTrayArgs(device): """ Builds a list of arguments to be passed to an C{eject} command. The arguments will cause the C{eject} command to unlock the tray. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-i") args.append("off") args.append(device) return args @staticmethod def _buildCloseTrayArgs(device): """ Builds a list of arguments to be passed to an C{eject} command. The arguments will cause the C{eject} command to close the tray and reload the media. No validation is done by this method as to whether this action actually makes sense. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. 
""" args = [] args.append("-t") args.append(device) return args @staticmethod def _buildPropertiesArgs(hardwareId): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to ask the device for a list of its capacities via the C{-prcap} switch. @param hardwareId: Hardware id for the device (either SCSI id or device path) @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-prcap") args.append("dev=%s" % hardwareId) return args @staticmethod def _buildBoundariesArgs(hardwareId): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to ask the device for the current multisession boundaries of the media using the C{-msinfo} switch. @param hardwareId: Hardware id for the device (either SCSI id or device path) @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-msinfo") args.append("dev=%s" % hardwareId) return args @staticmethod def _buildBlankArgs(hardwareId, driveSpeed=None): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to blank the media in the device identified by C{hardwareId}. No validation is done by this method as to whether the action makes sense (i.e. to whether the media even can be blanked). @param hardwareId: Hardware id for the device (either SCSI id or device path) @param driveSpeed: Speed at which the drive writes. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-v") args.append("blank=fast") if driveSpeed is not None: args.append("speed=%d" % driveSpeed) args.append("dev=%s" % hardwareId) return args @staticmethod def _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True): """ Builds a list of arguments to be passed to a C{cdrecord} command. 
The arguments will cause the C{cdrecord} command to write the indicated ISO image (C{imagePath}) to the media in the device identified by C{hardwareId}. The C{writeMulti} argument controls whether to write a multisession disc. No validation is done by this method as to whether the action makes sense (i.e. to whether the device even can write multisession discs, for instance). @param hardwareId: Hardware id for the device (either SCSI id or device path) @param imagePath: Path to an ISO image on disk. @param driveSpeed: Speed at which the drive writes. @param writeMulti: Indicates whether to write a multisession disc. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-v") if driveSpeed is not None: args.append("speed=%d" % driveSpeed) args.append("dev=%s" % hardwareId) if writeMulti: args.append("-multi") args.append("-data") args.append(imagePath) return args CedarBackup2-2.22.0/CedarBackup2/writers/__init__.py # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Official Cedar Backup Extensions # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Cedar Backup writers. This package consolidates all of the modules that implement "image writer" functionality, including utilities and specific writer implementations. @author: Kenneth J. 
Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup2.writers import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'util', 'cdwriter', 'dvdwriter', ] CedarBackup2-2.22.0/CedarBackup2/writers/dvdwriter.py0000664000175000017500000012014112143053141024047 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: dvdwriter.py 1041 2013-05-10 02:05:13Z pronovic $ # Purpose : Provides functionality related to DVD writer devices. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides functionality related to DVD writer devices. 
@sort: MediaDefinition, DvdWriter, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW @var MEDIA_DVDPLUSR: Constant representing DVD+R media. @var MEDIA_DVDPLUSRW: Constant representing DVD+RW media. @author: Kenneth J. Pronovici @author: Dmitry Rutsky """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging import tempfile import time # Cedar Backup modules from CedarBackup2.writers.util import IsoImage from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import convertSize, displayBytes, encodePath from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_GBYTES from CedarBackup2.writers.util import validateDevice, validateDriveSpeed ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.writers.dvdwriter") MEDIA_DVDPLUSR = 1 MEDIA_DVDPLUSRW = 2 GROWISOFS_COMMAND = [ "growisofs", ] EJECT_COMMAND = [ "eject", ] ######################################################################## # MediaDefinition class definition ######################################################################## class MediaDefinition(object): """ Class encapsulating information about DVD media definitions. The following media types are accepted: - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) Note that the capacity attribute returns capacity in terms of ISO sectors (C{util.ISO_SECTOR_SIZE}). This is for compatibility with the CD writer functionality. The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
@sort: __init__, mediaType, rewritable, capacity """ def __init__(self, mediaType): """ Creates a media definition for the indicated media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ self._mediaType = None self._rewritable = False self._capacity = 0.0 self._setValues(mediaType) def _setValues(self, mediaType): """ Sets values based on media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ if mediaType not in [MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW, ]: raise ValueError("Invalid media type %d." % mediaType) self._mediaType = mediaType if self._mediaType == MEDIA_DVDPLUSR: self._rewritable = False self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB elif self._mediaType == MEDIA_DVDPLUSRW: self._rewritable = True self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB def _getMediaType(self): """ Property target used to get the media type value. """ return self._mediaType def _getRewritable(self): """ Property target used to get the rewritable flag value. """ return self._rewritable def _getCapacity(self): """ Property target used to get the capacity value. """ return self._capacity mediaType = property(_getMediaType, None, None, doc="Configured media type.") rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") capacity = property(_getCapacity, None, None, doc="Total capacity of media in 2048-byte sectors.") ######################################################################## # MediaCapacity class definition ######################################################################## class MediaCapacity(object): """ Class encapsulating information about DVD media capacity. 
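The 4.4 "true" GB capacity above converts to 2048-byte ISO sectors via C{convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)}. As a standalone illustration of the arithmetic that call is assumed to perform (the names below are illustrative, not the real Cedar Backup helpers):

```python
# Standalone sketch of the capacity arithmetic behind MediaDefinition.
# Mirrors what convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) is assumed to do.

ISO_SECTOR_SIZE = 2048        # bytes per ISO sector
BYTES_PER_GBYTE = 1024 ** 3   # "true" gigabytes, not "marketing" gigabytes

def gb_to_sectors(gb):
    """Convert a capacity in true gigabytes to 2048-byte ISO sectors."""
    return (gb * BYTES_PER_GBYTE) / ISO_SECTOR_SIZE

capacity = gb_to_sectors(4.4)  # DVD+R / DVD+RW capacity, about 2.3M sectors
```

So a 4.4 GB disc works out to roughly 2,306,867 sectors, which is the figure the capacity property reports.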
Space used and space available do not include any information about media lead-in or other overhead. @sort: __init__, bytesUsed, bytesAvailable, totalCapacity, utilized """ def __init__(self, bytesUsed, bytesAvailable): """ Initializes a capacity object. @raise ValueError: If the bytes used and available values are not floats. """ self._bytesUsed = float(bytesUsed) self._bytesAvailable = float(bytesAvailable) def __str__(self): """ Informal string representation for class instance. """ return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized) def _getBytesUsed(self): """ Property target used to get the bytes-used value. """ return self._bytesUsed def _getBytesAvailable(self): """ Property target used to get the bytes-available value. """ return self._bytesAvailable def _getTotalCapacity(self): """ Property target to get the total capacity (used + available). """ return self.bytesUsed + self.bytesAvailable def _getUtilized(self): """ Property target to get the percent of capacity which is utilized. """ if self.bytesAvailable <= 0.0: return 100.0 elif self.bytesUsed <= 0.0: return 0.0 return (self.bytesUsed / self.totalCapacity) * 100.0 bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.") bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.") totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.") utilized = property(_getUtilized, None, None, doc="Percentage of the total capacity which is utilized.") ######################################################################## # _ImageProperties class definition ######################################################################## class _ImageProperties(object): """ Simple value object to hold image properties for C{DvdWriter}.
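The percent-utilized logic in C{MediaCapacity} guards the division with two edge cases before computing the ratio. A minimal standalone sketch of the same arithmetic (the function name is illustrative):

```python
def utilized(bytes_used, bytes_available):
    """Percentage of total capacity in use, with the same edge-case
    handling as MediaCapacity._getUtilized(): a disc with no space
    left counts as 100% full, an empty disc as 0% full."""
    if bytes_available <= 0.0:
        return 100.0   # nothing left to write: treat as completely full
    elif bytes_used <= 0.0:
        return 0.0     # nothing written yet
    return (bytes_used / (bytes_used + bytes_available)) * 100.0
```

The early returns matter: without them, an empty-but-unreadable disc could trigger a zero division, and a brand-new disc would already report some small nonzero utilization from lead-in overhead.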
""" def __init__(self): self.newDisc = False self.tmpdir = None self.mediaLabel = None self.entries = None # dict mapping path to graft point ######################################################################## # DvdWriter class definition ######################################################################## class DvdWriter(object): ###################### # Class documentation ###################### """ Class representing a device that knows how to write some kinds of DVD media. Summary ======= This is a class representing a device that knows how to write some kinds of DVD media. It provides common operations for the device, such as ejecting the media and writing data to the media. This class is implemented in terms of the C{eject} and C{growisofs} utilities, all of which should be available on most UN*X platforms. Image Writer Interface ====================== The following methods make up the "image writer" interface shared with other kinds of writers:: __init__ initializeImage() addImageEntry() writeImage() setImageNewDisc() retrieveCapacity() getEstimatedImageSize() Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer. The media attribute is also assumed to be available. Unlike the C{CdWriter}, the C{DvdWriter} can only operate in terms of filesystem devices, not SCSI devices. So, although the constructor interface accepts a SCSI device parameter for the sake of compatibility, it's not used. Media Types =========== This class knows how to write to DVD+R and DVD+RW media, represented by the following constants: - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) The difference is that DVD+RW media can be rewritten, while DVD+R media cannot be (although at present, C{DvdWriter} does not really differentiate between rewritable and non-rewritable media). 
The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte. The underlying C{growisofs} utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type. Device Attributes vs. Media Attributes ====================================== As with the cdwriter functionality, a given dvdwriter instance has two different kinds of attributes associated with it. I call these device attributes and media attributes. Device attributes are things which can be determined without looking at the media. Media attributes are attributes which vary depending on the state of the media. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls. Compared to cdwriters, dvdwriters have very few attributes. This is due to differences between the way C{growisofs} works relative to C{cdrecord}. Media Capacity ============== One major difference between the C{cdrecord}/C{mkisofs} utilities used by the cdwriter class and the C{growisofs} utility used here is that the process of estimating remaining capacity and image size is more straightforward with C{cdrecord}/C{mkisofs} than with C{growisofs}. In this class, remaining capacity is calculated by doing a dry run of C{growisofs} and grabbing some information from the output of that command. Image size is estimated by asking the C{IsoImage} class for an estimate and then adding on a "fudge factor" determined through experimentation. Testing ======= It's rather difficult to test this code in an automated fashion, even if you have access to a physical DVD writer drive.
It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, some of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the "difficult" functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all. @sort: __init__, isRewritable, retrieveCapacity, openTray, closeTray, refreshMedia, initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize, _writeImage, _getEstimatedImageSize, _searchForOverburn, _buildWriteArgs, device, scsiId, hardwareId, driveSpeed, media, deviceHasTray, deviceCanEject """ ############## # Constructor ############## def __init__(self, device, scsiId=None, driveSpeed=None, mediaType=MEDIA_DVDPLUSRW, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False): """ Initializes a DVD writer object. Since C{growisofs} can only address devices using the device path (i.e. C{/dev/dvd}), the hardware id will always be set based on the device. If passed in, it will be saved for reference purposes only. We have no way to query the device to ask whether it has a tray or can be safely opened and closed. So, the C{noEject} flag is used to set these values. If C{noEject=False}, then we assume a tray exists and open/close is safe. If C{noEject=True}, then we assume that there is no tray and open/close is not safe. @note: The C{unittest} parameter should never be set to C{True} outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose. 
@param device: Filesystem device associated with this writer. @type device: Absolute path to a filesystem device, i.e. C{/dev/dvd} @param scsiId: SCSI id for the device (optional, for reference only). @type scsiId: If provided, SCSI id in the form C{[:]scsibus,target,lun} @param driveSpeed: Speed at which the drive writes. @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. @param mediaType: Type of the media that is assumed to be in the drive. @type mediaType: One of the valid media types as discussed above. @param noEject: Tells Cedar Backup that the device cannot safely be ejected @type noEject: Boolean true/false @param refreshMediaDelay: Refresh media delay to use, if any @type refreshMediaDelay: Number of seconds, an integer >= 0 @param ejectDelay: Eject delay to use, if any @type ejectDelay: Number of seconds, an integer >= 0 @param unittest: Turns off certain validations, for use in unit testing. @type unittest: Boolean true/false @raise ValueError: If the device is not valid for some reason. @raise ValueError: If the SCSI id is not in a valid form. @raise ValueError: If the drive speed is not an integer >= 1. """ if scsiId is not None: logger.warn("SCSI id [%s] will be ignored by DvdWriter." % scsiId) self._image = None # optionally filled in by initializeImage() self._device = validateDevice(device, unittest) self._scsiId = scsiId # not validated, because it's just for reference self._driveSpeed = validateDriveSpeed(driveSpeed) self._media = MediaDefinition(mediaType) self._refreshMediaDelay = refreshMediaDelay self._ejectDelay = ejectDelay if noEject: self._deviceHasTray = False self._deviceCanEject = False else: self._deviceHasTray = True # just assume self._deviceCanEject = True # just assume ############# # Properties ############# def _getDevice(self): """ Property target used to get the device value. """ return self._device def _getScsiId(self): """ Property target used to get the SCSI id value.
""" return self._scsiId def _getHardwareId(self): """ Property target used to get the hardware id value. """ return self._device def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _getMedia(self): """ Property target used to get the media description. """ return self._media def _getDeviceHasTray(self): """ Property target used to get the device-has-tray flag. """ return self._deviceHasTray def _getDeviceCanEject(self): """ Property target used to get the device-can-eject flag. """ return self._deviceCanEject def _getRefreshMediaDelay(self): """ Property target used to get the configured refresh media delay, in seconds. """ return self._refreshMediaDelay def _getEjectDelay(self): """ Property target used to get the configured eject delay, in seconds. """ return self._ejectDelay device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") scsiId = property(_getScsiId, None, None, doc="SCSI id for the device (saved for reference only).") hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer (always the device path).") driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") ################################################# # Methods related to device and media attributes ################################################# def isRewritable(self): """Indicates whether the media is rewritable per configuration.""" 
return self._media.rewritable def retrieveCapacity(self, entireDisc=False): """ Retrieves capacity for the current media in terms of a C{MediaCapacity} object. If C{entireDisc} is passed in as C{True}, the capacity will be for the entire disc, as if it were to be rewritten from scratch. The same will happen if the disc can't be read for some reason. Otherwise, the capacity will be calculated by subtracting the sectors currently used on the disc, as reported by C{growisofs} itself. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @return: C{MediaCapacity} object describing the capacity of the media. @raise ValueError: If there is a problem parsing the C{growisofs} output @raise IOError: If the media could not be read for some reason. """ sectorsUsed = 0 if not entireDisc: sectorsUsed = self._retrieveSectorsUsed() sectorsAvailable = self._media.capacity - sectorsUsed # both are in sectors bytesUsed = convertSize(sectorsUsed, UNIT_SECTORS, UNIT_BYTES) bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) return MediaCapacity(bytesUsed, bytesAvailable) ####################################################### # Methods used for working with the internal ISO image ####################################################### def initializeImage(self, newDisc, tmpdir, mediaLabel=None): """ Initializes the writer's associated ISO image. This method initializes the C{image} instance variable so that the caller can use the C{addImageEntry} method. Once entries have been added, the C{writeImage} method can be called with no arguments. 
@param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false @param tmpdir: Temporary directory to use if needed @type tmpdir: String representing a directory path on disk @param mediaLabel: Media label to be applied to the image, if any @type mediaLabel: String, no more than 25 characters long """ self._image = _ImageProperties() self._image.newDisc = newDisc self._image.tmpdir = encodePath(tmpdir) self._image.mediaLabel = mediaLabel self._image.entries = {} # mapping from path to graft point (if any) def addImageEntry(self, path, graftPoint): """ Adds a filepath entry to the writer's associated ISO image. The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass C{None}. @note: Before calling this method, you must call L{initializeImage}. @param path: File or directory to be added to the image @type path: String representing a path on disk @param graftPoint: Graft point to be used when adding this entry @type graftPoint: String representing a graft point path, as described above @raise ValueError: If initializeImage() was not previously called @raise ValueError: If the path is not a valid file or directory """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") if not os.path.exists(path): raise ValueError("Path [%s] does not exist." % path) self._image.entries[path] = graftPoint def setImageNewDisc(self, newDisc): """ Resets (overrides) the newDisc flag on the internal image. @param newDisc: New disc flag to set @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") self._image.newDisc = newDisc def getEstimatedImageSize(self): """ Gets the estimated size of the image associated with the writer. This is an estimate and is conservative. 
The actual image could be as much as 450 blocks (sectors) smaller under some circumstances. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") return DvdWriter._getEstimatedImageSize(self._image.entries) ###################################### # Methods which expose device actions ###################################### def openTray(self): """ Opens the device's tray and leaves it open. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag. Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy. @raise IOError: If there is an error talking to the device.
""" if self._deviceHasTray and self._deviceCanEject: command = resolveCommand(EJECT_COMMAND) args = [ self.device, ] result = executeCommand(command, args)[0] if result != 0: logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") self.unlockTray() result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result) logger.debug("Kludge was apparently successful.") if self.ejectDelay is not None: logger.debug("Per configuration, sleeping %d seconds after opening tray." % self.ejectDelay) time.sleep(self.ejectDelay) def unlockTray(self): """ Unlocks the device's tray via 'eject -i off'. @raise IOError: If there is an error talking to the device. """ command = resolveCommand(EJECT_COMMAND) args = [ "-i", "off", self.device, ] result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to unlock tray." % result) def closeTray(self): """ Closes the device's tray. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. @raise IOError: If there is an error talking to the device. """ if self._deviceHasTray and self._deviceCanEject: command = resolveCommand(EJECT_COMMAND) args = [ "-t", self.device, ] result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to close tray." % result) def refreshMedia(self): """ Opens and then immediately closes the device's tray, to refresh the device's idea of the media. Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. 
(There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.) This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though. @raise IOError: If there is an error talking to the device. """ self.openTray() self.closeTray() self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! if self.refreshMediaDelay is not None: logger.debug("Per configuration, sleeping %d seconds to stabilize media state." % self.refreshMediaDelay) time.sleep(self.refreshMediaDelay) logger.debug("Media refresh complete; hopefully media state is stable now.") def writeImage(self, imagePath=None, newDisc=False, writeMulti=True): """ Writes an ISO image to the media in the device. If C{newDisc} is passed in as C{True}, we assume that the entire disc will be re-created from scratch. Note that unlike C{CdWriter}, C{DvdWriter} does not blank rewritable media before reusing it; however, C{growisofs} is called such that the media will be re-initialized as needed. If C{imagePath} is passed in as C{None}, then the existing image configured with C{initializeImage()} will be used. Under these circumstances, the passed-in C{newDisc} flag will be ignored and the value passed in to C{initializeImage()} will apply instead. The C{writeMulti} argument is ignored. It exists for compatibility with the Cedar Backup image writer interface. @note: The image size indicated in the log ("Image size will be...") is an estimate. The estimate is conservative and is probably larger than the actual space that C{dvdwriter} will use. 
@param imagePath: Path to an ISO image on disk, or C{None} to use writer's image @type imagePath: String representing a path on disk @param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false. @param writeMulti: Unused @type writeMulti: Boolean true/false @raise ValueError: If the image path is not absolute. @raise ValueError: If some path cannot be encoded properly. @raise IOError: If the media could not be written to for some reason. @raise ValueError: If no image is passed in and initializeImage() was not previously called """ if not writeMulti: logger.warn("writeMulti value of [%s] ignored." % writeMulti) if imagePath is None: if self._image is None: raise ValueError("Must call initializeImage() before using this method with no image path.") size = self.getEstimatedImageSize() logger.info("Image size will be %s (estimated)." % displayBytes(size)) available = self.retrieveCapacity(entireDisc=self._image.newDisc).bytesAvailable if size > available: logger.error("Image [%s] does not fit in available capacity [%s]." % (displayBytes(size), displayBytes(available))) raise IOError("Media does not contain enough capacity to store image.") self._writeImage(self._image.newDisc, None, self._image.entries, self._image.mediaLabel) else: if not os.path.isabs(imagePath): raise ValueError("Image path must be absolute.") imagePath = encodePath(imagePath) self._writeImage(newDisc, imagePath, None) ################################################################## # Utility methods for dealing with growisofs and dvd+rw-mediainfo ################################################################## def _writeImage(self, newDisc, imagePath, entries, mediaLabel=None): """ Writes an image to disc using either an entries list or an ISO image on disk. Callers are assumed to have done validation on paths, etc. before calling this method. 
@param newDisc: Indicates whether the disc should be re-initialized @param imagePath: Path to an ISO image on disk, or C{None} to use C{entries} @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} @raise IOError: If the media could not be written to for some reason. """ command = resolveCommand(GROWISOFS_COMMAND) args = DvdWriter._buildWriteArgs(newDisc, self.hardwareId, self._driveSpeed, imagePath, entries, mediaLabel, dryRun=False) (result, output) = executeCommand(command, args, returnOutput=True) if result != 0: DvdWriter._searchForOverburn(output) # throws own exception if overburn condition is found raise IOError("Error (%d) executing command to write disc." % result) self.refreshMedia() @staticmethod def _getEstimatedImageSize(entries): """ Gets the estimated size of a set of image entries. This is implemented in terms of the C{IsoImage} class. The returned value is calculated by adding a "fudge factor" to the value from C{IsoImage}. This fudge factor was determined by experimentation and is conservative -- the actual image could be as much as 450 blocks smaller under some circumstances. @param entries: Dictionary mapping path to graft point. @return: Total estimated size of image, in bytes. @raise ValueError: If there are no entries in the dictionary @raise ValueError: If any path in the dictionary does not exist @raise IOError: If there is a problem calling C{mkisofs}. """ fudgeFactor = convertSize(2500.0, UNIT_SECTORS, UNIT_BYTES) # determined through experimentation if len(entries.keys()) == 0: raise ValueError("Must add at least one entry with addImageEntry().") image = IsoImage() for path in entries.keys(): image.addEntry(path, entries[path], override=False, contentsOnly=True) estimatedSize = image.getEstimatedSize() + fudgeFactor return estimatedSize def _retrieveSectorsUsed(self): """ Retrieves the number of sectors used on the current media. This is a little ugly.
We need to call growisofs in "dry-run" mode and parse some information from its output. However, to do that, we need to create a dummy file that we can pass to the command -- and we have to make sure to remove it later. Once growisofs has been run, then we call C{_parseSectorsUsed} to parse the output and calculate the number of sectors used on the media. @return: Number of sectors used on the media """ tempdir = tempfile.mkdtemp() try: entries = { tempdir: None } args = DvdWriter._buildWriteArgs(False, self.hardwareId, self.driveSpeed, None, entries, None, dryRun=True) command = resolveCommand(GROWISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True) if result != 0: logger.debug("Error (%d) calling growisofs to read sectors used." % result) logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.") return 0.0 sectorsUsed = DvdWriter._parseSectorsUsed(output) logger.debug("Determined sectors used as %s" % sectorsUsed) return sectorsUsed finally: if os.path.exists(tempdir): try: os.rmdir(tempdir) except: pass @staticmethod def _parseSectorsUsed(output): """ Parse sectors used information out of C{growisofs} output. The first line of a growisofs run looks something like this:: Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566' Dmitry has determined that the seek value in this line gives us information about how much data has previously been written to the media. That value multiplied by 16 yields the number of sectors used. If the seek line cannot be found in the output, then sectors used of zero is assumed. @return: Sectors used on the media, as a floating point number. @raise ValueError: If the output cannot be parsed properly. 
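The seek-value rule can be sanity-checked against the sample line itself: 87566 * 16 = 1401056, which matches the second value in the C{-C 973744,1401056} option on the same line. A standalone sketch of the parsing (the pattern here is simplified relative to the real implementation):

```python
import re

# The seek value counts 32k output blocks (obs=32k), and each 32k block
# is 16 ISO sectors of 2048 bytes (32768 / 2048 == 16).
SEEK_PATTERN = re.compile(r"seek=(\d+)")

def sectors_used(line):
    """Extract the sectors already used on the media from a growisofs
    dry-run output line, or 0.0 if no seek value is present."""
    match = SEEK_PATTERN.search(line)
    if match is None:
        return 0.0
    return float(match.group(1)) * 16.0

sample = ("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r "
          "-graft-points music4/=music | builtin_dd of=/dev/cdrom "
          "obs=32k seek=87566'")
```

Running C{sectors_used(sample)} reproduces the 1401056 figure from the C{-C} option, confirming the multiply-by-16 rule.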
""" if output is not None: pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)") for line in output: match = pattern.search(line) if match is not None: try: return float(match.group(4).strip()) * 16.0 except ValueError: raise ValueError("Unable to parse sectors used out of growisofs output.") logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.") return 0.0 @staticmethod def _searchForOverburn(output): """ Search for an "overburn" error message in C{growisofs} output. The C{growisofs} command returns a non-zero exit code and puts a message into the output -- even on a dry run -- if there is not enough space on the media. This is called an "overburn" condition. The error message looks like this:: :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written! This method looks for the overburn error message anywhere in the output. If a matching error message is found, an C{IOError} exception is raised containing relevant information about the problem. Otherwise, the method call returns normally. @param output: List of output lines to search, as from C{executeCommand} @raise IOError: If an overburn condition is found. """ if output is None: return pattern = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)") for line in output: match = pattern.search(line) if match is not None: try: available = convertSize(float(match.group(4).strip()), UNIT_SECTORS, UNIT_BYTES) size = convertSize(float(match.group(6).strip()), UNIT_SECTORS, UNIT_BYTES) logger.error("Image [%s] does not fit in available capacity [%s]." 
% (displayBytes(size), displayBytes(available))) except ValueError: logger.error("Image does not fit in available capacity (no useful capacity info available).") raise IOError("Media does not contain enough capacity to store image.") @staticmethod def _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False): """ Builds a list of arguments to be passed to a C{growisofs} command. The arguments will either cause C{growisofs} to write the indicated image file to disc, or will pass C{growisofs} a list of directories or files that should be written to disc. If a new image is created, it will always be created with Rock Ridge extensions (-r). A volume name will be applied (-V) if C{mediaLabel} is not C{None}. @param newDisc: Indicates whether the disc should be re-initialized @param hardwareId: Hardware id for the device @param driveSpeed: Speed at which the drive writes. @param imagePath: Path to an ISO image on disk, or C{None} to use C{entries} @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} @param mediaLabel: Media label to set on the image, if any @param dryRun: Says whether to make this a dry run (for checking capacity) @note: If we write an existing image to disc, then the mediaLabel is ignored. The media label is an attribute of the image, and should be set on the image when it is created. @note: We always pass the undocumented option C{-use-the-force-luke=tty} to growisofs. Without this option, growisofs will refuse to execute certain actions when running from cron. A good example is -Z, which happily overwrites an existing DVD from the command-line, but fails when run from cron. It took a while to figure that out, since it worked every time I tested it by hand. :( @return: List suitable for passing to L{util.executeCommand} as C{args}. @raise ValueError: If caller does not pass one or the other of imagePath or entries.
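A condensed standalone sketch of the entries case (Python 3 syntax; the image-path case, media label, and input validation are omitted, and the names are hypothetical):

```python
def build_write_args(new_disc, hardware_id, drive_speed, entries, dry_run=False):
    # Mirrors the argument-building logic: force-luke option, optional dry
    # run and speed, -Z (new disc) vs -M (append), then graft-point entries.
    args = ["-use-the-force-luke=tty"]  # allow execution from cron
    if dry_run:
        args.append("-dry-run")
    if drive_speed is not None:
        args.append("-speed=%d" % drive_speed)
    args.append("-Z" if new_disc else "-M")
    args.append(hardware_id)
    args.extend(["-r", "-graft-points"])
    for path in sorted(entries):  # sorted so results are consistent
        graft = entries[path]
        args.append(path if graft is None else "%s/=%s" % (graft.strip("/"), path))
    return args

print(build_write_args(True, "/dev/dvd", 2, {"/data/music": "music4"}))
# ['-use-the-force-luke=tty', '-speed=2', '-Z', '/dev/dvd', '-r',
#  '-graft-points', 'music4/=/data/music']
```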
""" args = [] if (imagePath is None and entries is None) or (imagePath is not None and entries is not None): raise ValueError("Must use either imagePath or entries.") args.append("-use-the-force-luke=tty") # tell growisofs to let us run from cron if dryRun: args.append("-dry-run") if driveSpeed is not None: args.append("-speed=%d" % driveSpeed) if newDisc: args.append("-Z") else: args.append("-M") if imagePath is not None: args.append("%s=%s" % (hardwareId, imagePath)) else: args.append(hardwareId) if mediaLabel is not None: args.append("-V") args.append(mediaLabel) args.append("-r") # Rock Ridge extensions with sane ownership and permissions args.append("-graft-points") keys = entries.keys() keys.sort() # just so we get consistent results for key in keys: # Same syntax as when calling mkisofs in IsoImage if entries[key] is None: args.append(key) else: args.append("%s/=%s" % (entries[key].strip("/"), key)) return args CedarBackup2-2.22.0/CedarBackup2/release.py0000664000175000017500000000231612143054156021770 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: release.py 1044 2013-05-10 02:16:12Z pronovic $ # Purpose : Provides location to maintain release information. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Provides location to maintain version information. @sort: AUTHOR, EMAIL, COPYRIGHT, VERSION, DATE, URL @var AUTHOR: Author of software. @var EMAIL: Email address of author. @var COPYRIGHT: Copyright date. @var VERSION: Software version. @var DATE: Software release date. @var URL: URL of Cedar Backup webpage. @author: Kenneth J. 
Pronovici """ AUTHOR = "Kenneth J. Pronovici" EMAIL = "pronovic@ieee.org" COPYRIGHT = "2004-2011,2013" VERSION = "2.22.0" DATE = "09 May 2013" URL = "http://cedar-backup.sourceforge.net/" CedarBackup2-2.22.0/CedarBackup2/knapsack.py0000664000175000017500000003214711415165677022164 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: knapsack.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Provides knapsack algorithms used for "fit" decisions # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######## # Notes ######## """ Provides the implementation for various knapsack algorithms. Knapsack algorithms are "fit" algorithms, used to take a set of "things" and decide on the optimal way to fit them into some container. The focus of this code is to fit files onto a disc, although the interface (in terms of item, item size and capacity size, with no units) is generic enough that it can be applied to items other than files. 
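To make the shared interface concrete, here is a minimal standalone sketch (Python 3 syntax, hypothetical file names) of the items/capacity shape that every algorithm in this module consumes, with a plain greedy fill standing in for the various fit strategies:

```python
# Hypothetical items dictionary: key -> (item, size), sizes unitless
items = {
    "a.tar.gz": ("a.tar.gz", 100),
    "b.tar.gz": ("b.tar.gz", 250),
    "c.tar.gz": ("c.tar.gz", 500),
}
capacity = 600

def greedy_fill(items, capacity, keys):
    # Walk keys in the given order, keeping anything that still fits;
    # each algorithm below differs mainly in how it orders the keys.
    included, used = [], 0
    for key in keys:
        size = items[key][1]
        if used + size <= capacity:
            included.append(key)
            used += size
    return included, used

print(greedy_fill(items, capacity, sorted(items)))
# (['a.tar.gz', 'b.tar.gz'], 350)
```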
All of the algorithms implemented below assume that "optimal" means "use up as much of the disc's capacity as possible", but each produces slightly different results. For instance, the best fit and first fit algorithms tend to include fewer files than the worst fit and alternate fit algorithms, even if they use the disc space more efficiently. Usually, for a given set of circumstances, it will be obvious to a human which algorithm is the right one to use, based on trade-offs between number of files included and ideal space utilization. It's a little more difficult to do this programmatically. For Cedar Backup's purposes (i.e. trying to fit a small number of collect-directory tarfiles onto a disc), worst-fit is probably the best choice if the goal is to include as many of the collect directories as possible. @sort: firstFit, bestFit, worstFit, alternateFit @author: Kenneth J. Pronovici """ ####################################################################### # Public functions ####################################################################### ###################### # firstFit() function ###################### def firstFit(items, capacity): """ Implements the first-fit knapsack algorithm. The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. 
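The first-fit selection described above, as a minimal standalone sketch (Python 3 syntax, hypothetical sizes; note how an item that overflows is simply skipped rather than ending the search):

```python
def first_fit(sizes, capacity):
    # Take (name, size) pairs in arbitrary order; skip any that overflow.
    included, used = [], 0
    for name, size in sizes:
        if used == capacity:
            break  # capacity met exactly
        if used + size <= capacity:
            included.append(name)
            used += size
    return included, used

# 'big' exceeds the remaining capacity and is thrown away; 'small2' still fits
print(first_fit([("small1", 40), ("big", 70), ("small2", 50)], 100))
# (['small1', 'small2'], 90)
```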
The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Search the list as it stands (arbitrary order) used = 0 remaining = capacity for key in items.keys(): if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return results return (included.keys(), used) #################### # bestFit() function #################### def bestFit(items, capacity): """ Implements the best-fit knapsack algorithm. The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items.
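A minimal standalone sketch of the best-fit selection (Python 3 syntax, hypothetical sizes; sort largest-first, then fill, skipping items that overflow):

```python
def best_fit(sizes, capacity):
    # Largest items first, so as few items as possible fill the capacity.
    included, used = [], 0
    for name, size in sorted(sizes, key=lambda entry: entry[1], reverse=True):
        if used + size <= capacity:
            included.append(name)
            used += size
    return included, used

print(best_fit([("a", 10), ("b", 90), ("c", 40)], 100))
# (['b', 'a'], 100)
```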
Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from largest to smallest itemlist = items.items() itemlist.sort(lambda x, y: cmp(y[1][1], x[1][1])) # sort descending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity for key in keys: if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return the results return (included.keys(), used) ###################### # worstFit() function ###################### def worstFit(items, capacity): """ Implements the worst-fit knapsack algorithm. 
The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items.
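A minimal standalone sketch of the worst-fit selection (Python 3 syntax, hypothetical sizes; identical to the best-fit sketch except that the sort runs smallest-first):

```python
def worst_fit(sizes, capacity):
    # Smallest items first, maximizing the number of items included.
    included, used = [], 0
    for name, size in sorted(sizes, key=lambda entry: entry[1]):
        if used + size <= capacity:
            included.append(name)
            used += size
    return included, used

print(worst_fit([("a", 10), ("b", 90), ("c", 40)], 100))
# (['a', 'c'], 50)
```

On this same hypothetical input, best-fit would reach 100 of 100 with two items, while worst-fit reaches only 50, which illustrates the item-count versus utilization trade-off discussed above.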
@param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from smallest to largest itemlist = items.items() itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1])) # sort ascending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity for key in keys: if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return results return (included.keys(), used) ######################### # alternateFit() function ######################### def alternateFit(items, capacity): """ Implements the alternate-fit knapsack algorithm. This algorithm (which I'm calling "alternate-fit" as in "alternate from one to the other") tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.
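The alternating front/back selection can be sketched standalone (Python 3 syntax, hypothetical sizes; each pass tries one item from the small end and one from the large end, discarding anything that overflows):

```python
def alternate_fit(sizes, capacity):
    # Sort ascending, split in half, and walk the halves from opposite ends.
    ordered = sorted(sizes, key=lambda entry: entry[1])
    front, back = ordered[:len(ordered) // 2], ordered[len(ordered) // 2:]
    back.reverse()  # back half now runs largest to smallest
    included, used = [], 0
    i = j = 0
    while used < capacity and (i < len(front) or j < len(back)):
        if i < len(front):
            name, size = front[i]
            if used + size <= capacity:
                included.append(name)
                used += size
            i += 1  # advance even if the item was thrown away
        if j < len(back):
            name, size = back[j]
            if used + size <= capacity:
                included.append(name)
                used += size
            j += 1
    return included, used

print(alternate_fit([("a", 10), ("b", 20), ("c", 30), ("d", 90)], 100))
# (['a', 'd'], 100)
```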
The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from smallest to largest itemlist = items.items() itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1])) # sort ascending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity front = keys[0:len(keys)/2] back = keys[len(keys)/2:len(keys)] back.reverse() i = 0 j = 0 while remaining > 0 and (i < len(front) or j < len(back)): if i < len(front): if remaining - items[front[i]][1] >= 0: included[front[i]] = None used += items[front[i]][1] remaining -= items[front[i]][1] i += 1 if j < len(back): if remaining - items[back[j]][1] >= 0: included[back[j]] = None used += items[back[j]][1] remaining -= items[back[j]][1] j += 1 # Return results return (included.keys(), used) CedarBackup2-2.22.0/CedarBackup2/util.py0000664000175000017500000022127212143053373021331 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." 
# S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # Portions copyright (c) 2001, 2002 Python Software Foundation. # All Rights Reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: util.py 1042 2013-05-10 02:10:00Z pronovic $ # Purpose : Provides general-purpose utilities. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides general-purpose utilities. @sort: AbsolutePathList, ObjectTypeList, RestrictedContentList, RegexMatchList, RegexList, _Vertex, DirectedGraph, PathResolverSingleton, sortDict, convertSize, getUidGid, changeOwnership, splitCommandLine, resolveCommand, executeCommand, calculateFileAge, encodePath, nullDevice, deriveDayOfWeek, isStartOfWeek, buildNormalizedPath, ISO_SECTOR_SIZE, BYTES_PER_SECTOR, BYTES_PER_KBYTE, BYTES_PER_MBYTE, BYTES_PER_GBYTE, KBYTES_PER_MBYTE, MBYTES_PER_GBYTE, SECONDS_PER_MINUTE, MINUTES_PER_HOUR, HOURS_PER_DAY, SECONDS_PER_DAY, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, UNIT_SECTORS @var ISO_SECTOR_SIZE: Size of an ISO image sector, in bytes. 
@var BYTES_PER_SECTOR: Number of bytes (B) per ISO sector. @var BYTES_PER_KBYTE: Number of bytes (B) per kilobyte (kB). @var BYTES_PER_MBYTE: Number of bytes (B) per megabyte (MB). @var BYTES_PER_GBYTE: Number of bytes (B) per gigabyte (GB). @var KBYTES_PER_MBYTE: Number of kilobytes (kB) per megabyte (MB). @var MBYTES_PER_GBYTE: Number of megabytes (MB) per gigabyte (GB). @var SECONDS_PER_MINUTE: Number of seconds per minute. @var MINUTES_PER_HOUR: Number of minutes per hour. @var HOURS_PER_DAY: Number of hours per day. @var SECONDS_PER_DAY: Number of seconds per day. @var UNIT_BYTES: Constant representing the byte (B) unit for conversion. @var UNIT_KBYTES: Constant representing the kilobyte (kB) unit for conversion. @var UNIT_MBYTES: Constant representing the megabyte (MB) unit for conversion. @var UNIT_GBYTES: Constant representing the gigabyte (GB) unit for conversion. @var UNIT_SECTORS: Constant representing the ISO sector unit for conversion. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import math import os import re import time import logging import string # pylint: disable=W0402 from subprocess import Popen, STDOUT, PIPE from CedarBackup2.release import VERSION, DATE try: import pwd import grp _UID_GID_AVAILABLE = True except ImportError: _UID_GID_AVAILABLE = False ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.util") outputLogger = logging.getLogger("CedarBackup2.output") ISO_SECTOR_SIZE = 2048.0 # in bytes BYTES_PER_SECTOR = ISO_SECTOR_SIZE BYTES_PER_KBYTE = 1024.0 KBYTES_PER_MBYTE = 1024.0 MBYTES_PER_GBYTE = 1024.0 BYTES_PER_MBYTE = BYTES_PER_KBYTE * KBYTES_PER_MBYTE BYTES_PER_GBYTE = BYTES_PER_MBYTE *
MBYTES_PER_GBYTE SECONDS_PER_MINUTE = 60.0 MINUTES_PER_HOUR = 60.0 HOURS_PER_DAY = 24.0 SECONDS_PER_DAY = SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY UNIT_BYTES = 0 UNIT_KBYTES = 1 UNIT_MBYTES = 2 UNIT_GBYTES = 4 UNIT_SECTORS = 3 MTAB_FILE = "/etc/mtab" MOUNT_COMMAND = [ "mount", ] UMOUNT_COMMAND = [ "umount", ] DEFAULT_LANGUAGE = "C" LANG_VAR = "LANG" LOCALE_VARS = [ "LC_ADDRESS", "LC_ALL", "LC_COLLATE", "LC_CTYPE", "LC_IDENTIFICATION", "LC_MEASUREMENT", "LC_MESSAGES", "LC_MONETARY", "LC_NAME", "LC_NUMERIC", "LC_PAPER", "LC_TELEPHONE", "LC_TIME", ] ######################################################################## # UnorderedList class definition ######################################################################## class UnorderedList(list): """ Class representing an "unordered list". An "unordered list" is a list in which only the contents matter, not the order in which the contents appear in the list. For instance, we might be keeping track of set of paths in a list, because it's convenient to have them in that form. However, for comparison purposes, we would only care that the lists contain exactly the same contents, regardless of order. I have come up with two reasonable ways of doing this, plus a couple more that would work but would be a pain to implement. My first method is to copy and sort each list, comparing the sorted versions. This will only work if two lists with exactly the same members are guaranteed to sort in exactly the same order. The second way would be to create two Sets and then compare the sets. However, this would lose information about any duplicates in either list. I've decided to go with option #1 for now. I'll modify this code if I run into problems in the future. We override the original C{__eq__}, C{__ne__}, C{__ge__}, C{__gt__}, C{__le__} and C{__lt__} list methods to change the definition of the various comparison operators. 
In all cases, the comparison is changed to return the result of the original operation I{but instead comparing sorted lists}. This is going to be quite a bit slower than a normal list, so you probably only want to use it on small lists. """ def __eq__(self, other): """ Definition of C{==} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self == other}. """ if other is None: return False selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__eq__(otherSorted) def __ne__(self, other): """ Definition of C{!=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self != other}. """ if other is None: return True selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__ne__(otherSorted) def __ge__(self, other): """ Definition of S{>=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self >= other}. """ if other is None: return True selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__ge__(otherSorted) def __gt__(self, other): """ Definition of C{>} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self > other}. """ if other is None: return True selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__gt__(otherSorted) def __le__(self, other): """ Definition of S{<=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self <= other}. """ if other is None: return False selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__le__(otherSorted) def __lt__(self, other): """ Definition of C{<} operator for this class. @param other: Other object to compare to. 
@return: True/false depending on whether C{self < other}. """ if other is None: return False selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__lt__(otherSorted) ######################################################################## # AbsolutePathList class definition ######################################################################## class AbsolutePathList(UnorderedList): """ Class representing a list of absolute paths. This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list is an absolute path. Each item added to the list is encoded using L{encodePath}. If we don't do this, we have problems trying certain operations between strings and unicode objects, particularly for "odd" filenames that can't be encoded in standard ASCII. """ def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is not an absolute path. """ if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) list.append(self, encodePath(item)) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is not an absolute path. """ if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) list.insert(self, index, encodePath(item)) def extend(self, seq): """ Overrides the standard C{extend} method. @raise ValueError: If any item is not an absolute path. """ for item in seq: if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) for item in seq: list.append(self, encodePath(item)) ######################################################################## # ObjectTypeList class definition ######################################################################## class ObjectTypeList(UnorderedList): """ Class representing a list containing only objects with a certain type. This is an unordered list.
We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list matches the type that is requested. The comparison uses the built-in C{isinstance}, which should allow subclasses of the requested type to be added to the list as well. The C{objectName} value will be used in exceptions, i.e. C{"Item must be a CollectDir object."} if C{objectName} is C{"CollectDir"}. """ def __init__(self, objectType, objectName): """ Initializes a typed list for a particular type. @param objectType: Type that the list elements must match. @param objectName: Short string containing the "name" of the type. """ super(ObjectTypeList, self).__init__() self.objectType = objectType self.objectName = objectName def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item does not match requested type. """ if not isinstance(item, self.objectType): raise ValueError("Item must be a %s object." % self.objectName) list.append(self, item) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item does not match requested type. """ if not isinstance(item, self.objectType): raise ValueError("Item must be a %s object." % self.objectName) list.insert(self, index, item) def extend(self, seq): """ Overrides the standard C{extend} method. @raise ValueError: If item does not match requested type. """ for item in seq: if not isinstance(item, self.objectType): raise ValueError("All items must be %s objects." % self.objectName) list.extend(self, seq) ######################################################################## # RestrictedContentList class definition ######################################################################## class RestrictedContentList(UnorderedList): """ Class representing a list containing only objects with certain values. This is an unordered list.
We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list is among the valid values. We use a standard comparison, so pretty much anything can be in the list of valid values. The C{valuesDescr} value will be used in exceptions, i.e. C{"Item must be one of the values in VALID_ACTIONS"} if C{valuesDescr} is C{"VALID_ACTIONS"}. @note: This class doesn't make any attempt to trap for nonsensical arguments. All of the values in the values list should be of the same type (i.e. strings). Then, all list operations also need to be of that type (i.e. you should always insert or append just strings). If you mix types -- for instance lists and strings -- you will likely see AttributeError exceptions or other problems. """ def __init__(self, valuesList, valuesDescr, prefix=None): """ Initializes a list restricted to containing certain values. @param valuesList: List of valid values. @param valuesDescr: Short string describing list of values. @param prefix: Prefix to use in error messages (None results in prefix "Item") """ super(RestrictedContentList, self).__init__() self.prefix = "Item" if prefix is not None: self.prefix = prefix self.valuesList = valuesList self.valuesDescr = valuesDescr def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is not in the values list. """ if item not in self.valuesList: raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr)) list.append(self, item) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is not in the values list. """ if item not in self.valuesList: raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr)) list.insert(self, index, item) def extend(self, seq): """ Overrides the standard C{extend} method. @raise ValueError: If item is not in the values list.
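A minimal standalone analogue of this validation pattern (Python 3 syntax; C{RestrictedList} and the example values are hypothetical, and only C{append} is shown -- C{insert} and C{extend} would apply the same check):

```python
class RestrictedList(list):
    # Only values from a fixed set may be added to the list.
    def __init__(self, allowed):
        super().__init__()
        self.allowed = allowed

    def append(self, item):
        if item not in self.allowed:
            raise ValueError("Item must be one of the allowed values.")
        super().append(item)

actions = RestrictedList(["collect", "stage", "store", "purge"])
actions.append("collect")   # accepted
try:
    actions.append("bogus")  # not in the allowed set
except ValueError:
    print("rejected")
```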
""" for item in seq: if item not in self.valuesList: raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr)) list.extend(self, seq) ######################################################################## # RegexMatchList class definition ######################################################################## class RegexMatchList(UnorderedList): """ Class representing a list containing only strings that match a regular expression. If C{emptyAllowed} is passed in as C{False}, then empty strings are explicitly disallowed, even if they happen to match the regular expression. (C{None} values are always disallowed, since string operations are not permitted on C{None}.) This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list matches the indicated regular expression. @note: If you try to put values that are not strings into the list, you will likely get either TypeError or AttributeError exceptions as a result. """ def __init__(self, valuesRegex, emptyAllowed=True, prefix=None): """ Initializes a list restricted to containing certain values. @param valuesRegex: Regular expression that must be matched, as a string @param emptyAllowed: Indicates whether empty or None values are allowed. @param prefix: Prefix to use in error messages (None results in prefix "Item") """ super(RegexMatchList, self).__init__() self.prefix = "Item" if prefix is not None: self.prefix = prefix self.valuesRegex = valuesRegex self.emptyAllowed = emptyAllowed self.pattern = re.compile(self.valuesRegex) def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is None @raise ValueError: If item is empty and empty values are not allowed @raise ValueError: If item does not match the configured regular expression """ if item is None or (not self.emptyAllowed and item == ""): raise ValueError("%s cannot be empty." 
% self.prefix) if not self.pattern.search(item): raise ValueError("%s is not valid: [%s]" % (self.prefix, item)) list.append(self, item) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is None @raise ValueError: If item is empty and empty values are not allowed @raise ValueError: If item does not match the configured regular expression """ if item is None or (not self.emptyAllowed and item == ""): raise ValueError("%s cannot be empty." % self.prefix) if not self.pattern.search(item): raise ValueError("%s is not valid: [%s]" % (self.prefix, item)) list.insert(self, index, item) def extend(self, seq): """ Overrides the standard C{extend} method. @raise ValueError: If any item is None @raise ValueError: If any item is empty and empty values are not allowed @raise ValueError: If any item does not match the configured regular expression """ for item in seq: if item is None or (not self.emptyAllowed and item == ""): raise ValueError("%s cannot be empty." % self.prefix) if not self.pattern.search(item): raise ValueError("%s is not valid: [%s]" % (self.prefix, item)) list.extend(self, seq) ######################################################################## # RegexList class definition ######################################################################## class RegexList(UnorderedList): """ Class representing a list of valid regular expression strings. This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list is a valid regular expression. """ def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is not a valid regular expression. """ try: re.compile(item) except re.error: raise ValueError("Not a valid regular expression: [%s]" % item) list.append(self, item) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is not a valid regular expression.
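The regex-validated list can be illustrated with a condensed standalone sketch. This re-implements only the C{append} path of the C{RegexMatchList} idea and omits the C{prefix} argument and the C{insert}/C{extend} overrides; the class name is illustrative:

```python
import re

class RegexValidatedList(list):
    # Sketch of the RegexMatchList pattern: every string added must
    # match a configured regular expression; None and (optionally)
    # empty strings are rejected before the regex is consulted.
    def __init__(self, valuesRegex, emptyAllowed=True):
        super(RegexValidatedList, self).__init__()
        self.pattern = re.compile(valuesRegex)
        self.emptyAllowed = emptyAllowed

    def append(self, item):
        if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("Item cannot be empty.")
        if not self.pattern.search(item):
            raise ValueError("Item is not valid: [%s]" % item)
        list.append(self, item)
```

Note that the empty-string check runs first, so an empty value is rejected even when it would happen to match the pattern.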
""" try: re.compile(item) except re.error: raise ValueError("Not a valid regular expression: [%s]" % item) list.insert(self, index, item) def extend(self, seq): """ Overrides the standard C{insert} method. @raise ValueError: If any item is not an absolute path. """ for item in seq: try: re.compile(item) except re.error: raise ValueError("Not a valid regular expression: [%s]" % item) for item in seq: list.append(self, item) ######################################################################## # Directed graph implementation ######################################################################## class _Vertex(object): """ Represents a vertex (or node) in a directed graph. """ def __init__(self, name): """ Constructor. @param name: Name of this graph vertex. @type name: String value. """ self.name = name self.endpoints = [] self.state = None class DirectedGraph(object): """ Represents a directed graph. A graph B{G=(V,E)} consists of a set of vertices B{V} together with a set B{E} of vertex pairs or edges. In a directed graph, each edge also has an associated direction (from vertext B{v1} to vertex B{v2}). A C{DirectedGraph} object provides a way to construct a directed graph and execute a depth- first search. This data structure was designed based on the graphing chapter in U{The Algorithm Design Manual}, by Steven S. Skiena. This class is intended to be used by Cedar Backup for dependency ordering. Because of this, it's not quite general-purpose. Unlike a "general" graph, every vertex in this graph has at least one edge pointing to it, from a special "start" vertex. This is so no vertices get "lost" either because they have no dependencies or because nothing depends on them. """ _UNDISCOVERED = 0 _DISCOVERED = 1 _EXPLORED = 2 def __init__(self, name): """ Directed graph constructor. @param name: Name of this graph. @type name: String value. 
""" if name is None or name == "": raise ValueError("Graph name must be non-empty.") self._name = name self._vertices = {} self._startVertex = _Vertex(None) # start vertex is only vertex with no name def __repr__(self): """ Official string representation for class instance. """ return "DirectedGraph(%s)" % self.name def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ # pylint: disable=W0212 if other is None: return 1 if self.name != other.name: if self.name < other.name: return -1 else: return 1 if self._vertices != other._vertices: if self._vertices < other._vertices: return -1 else: return 1 return 0 def _getName(self): """ Property target used to get the graph name. """ return self._name name = property(_getName, None, None, "Name of the graph.") def createVertex(self, name): """ Creates a named vertex. @param name: vertex name @raise ValueError: If the vertex name is C{None} or empty. """ if name is None or name == "": raise ValueError("Vertex name must be non-empty.") vertex = _Vertex(name) self._startVertex.endpoints.append(vertex) # so every vertex is connected at least once self._vertices[name] = vertex def createEdge(self, start, finish): """ Adds an edge with an associated direction, from C{start} vertex to C{finish} vertex. @param start: Name of start vertex. @param finish: Name of finish vertex. @raise ValueError: If one of the named vertices is unknown. """ try: startVertex = self._vertices[start] finishVertex = self._vertices[finish] startVertex.endpoints.append(finishVertex) except KeyError, e: raise ValueError("Vertex [%s] could not be found." % e) def topologicalSort(self): """ Implements a topological sort of the graph. 
This method also enforces that the graph is a directed acyclic graph, which is a requirement of a topological sort. A directed acyclic graph (or "DAG") is a directed graph with no directed cycles. A topological sort of a DAG is an ordering on the vertices such that all edges go from left to right. Only an acyclic graph can have a topological sort, but any DAG has at least one topological sort. Since a topological sort only makes sense for an acyclic graph, this method throws an exception if a cycle is found. A depth-first search only makes sense if the graph is acyclic. If the graph contains any cycles, it is not possible to determine a consistent ordering for the vertices. @note: If a particular vertex has no edges, then its position in the final list depends on the order in which the vertices were created in the graph. If you're using this method to determine a dependency order, this makes sense: a vertex with no dependencies can go anywhere (and will). @return: Ordering on the vertices so that all edges go from left to right. @raise ValueError: If a cycle is found in the graph. """ ordering = [] for key in self._vertices: vertex = self._vertices[key] vertex.state = self._UNDISCOVERED for key in self._vertices: vertex = self._vertices[key] if vertex.state == self._UNDISCOVERED: self._topologicalSort(self._startVertex, ordering) return ordering def _topologicalSort(self, vertex, ordering): """ Recursive depth first search function implementing topological sort. @param vertex: Vertex to search @param ordering: List of vertices in proper order """ vertex.state = self._DISCOVERED for endpoint in vertex.endpoints: if endpoint.state == self._UNDISCOVERED: self._topologicalSort(endpoint, ordering) elif endpoint.state != self._EXPLORED: raise ValueError("Cycle found in graph (found '%s' while searching '%s')." 
% (endpoint.name, vertex.name)) if vertex.name is not None: ordering.insert(0, vertex.name) vertex.state = self._EXPLORED ######################################################################## # PathResolverSingleton class definition ######################################################################## class PathResolverSingleton(object): """ Singleton used for resolving executable paths. Various functions throughout Cedar Backup (including extensions) need a way to resolve the path of executables that they use. For instance, the image functionality needs to find the C{mkisofs} executable, and the Subversion extension needs to find the C{svnlook} executable. Cedar Backup's original behavior was to assume that the simple name (C{"svnlook"} or whatever) was available on the caller's C{$PATH}, and to fail otherwise. However, this turns out to be less than ideal, since for instance the root user might not always have executables like C{svnlook} in its path. One solution is to specify a path (either via an absolute path or some sort of path insertion or path appending mechanism) that would apply to the C{executeCommand()} function. This is not difficult to implement, but it seems like kind of a "big hammer" solution. Besides that, it might also represent a security flaw (for instance, I prefer not to mess with root's C{$PATH} on the application level if I don't have to). The alternative is to set up some sort of configuration for the path to certain executables, i.e. "find C{svnlook} in C{/usr/local/bin/svnlook}" or whatever. This PathResolverSingleton aims to provide a good solution to the mapping problem. Callers of all sorts (extensions or not) can get an instance of the singleton. Then, they call the C{lookup} method to try and resolve the executable they are looking for. Through the C{lookup} method, the caller can also specify a default to use if a mapping is not found.
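The DFS-based topological sort used by C{DirectedGraph} can be condensed into a standalone sketch. This re-implementation mirrors the undiscovered/discovered/explored state machine but uses plain dictionaries instead of C{_Vertex} objects and omits the special start vertex:

```python
def topological_sort(vertices, edges):
    # vertices: list of names; edges: list of (start, finish) pairs.
    # Returns an ordering such that all edges go from left to right.
    # Raises ValueError if a cycle is found, mirroring
    # DirectedGraph.topologicalSort().
    adjacency = dict((v, []) for v in vertices)
    for start, finish in edges:
        adjacency[start].append(finish)
    UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2
    state = dict((v, UNDISCOVERED) for v in vertices)
    ordering = []

    def visit(vertex):
        state[vertex] = DISCOVERED
        for endpoint in adjacency[vertex]:
            if state[endpoint] == UNDISCOVERED:
                visit(endpoint)
            elif state[endpoint] != EXPLORED:
                # A DISCOVERED (but not yet EXPLORED) endpoint means we
                # followed an edge back into the current DFS path: a cycle.
                raise ValueError("Cycle found in graph.")
        ordering.insert(0, vertex)  # prepend once all successors are done
        state[vertex] = EXPLORED

    for vertex in vertices:
        if state[vertex] == UNDISCOVERED:
            visit(vertex)
    return ordering
```

For dependency ordering, each edge C{(a, b)} reads "a must run before b", so the returned list is a valid execution order.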
This way, with no real effort on the part of the caller, behavior can neatly degrade to something equivalent to the current behavior if there is no special mapping or if the singleton was never initialized in the first place. Even better, extensions automagically get access to the same resolver functionality, and they don't even need to understand how the mapping happens. All extension authors need to do is document what executables their code requires, and the standard resolver configuration section will meet their needs. The class should be initialized once through the constructor somewhere in the main routine. Then, the main routine should call the L{fill} method to fill in the resolver's internal structures. Everyone else who needs to resolve a path will get an instance of the class using L{getInstance} and will then just call the L{lookup} method. @cvar _instance: Holds a reference to the singleton @ivar _mapping: Internal mapping from resource name to path. """ _instance = None # Holds a reference to singleton instance class _Helper: """Helper class to provide a singleton factory method.""" def __init__(self): pass def __call__(self, *args, **kw): # pylint: disable=W0212,R0201 if PathResolverSingleton._instance is None: obj = PathResolverSingleton() PathResolverSingleton._instance = obj return PathResolverSingleton._instance getInstance = _Helper() # Method that callers will use to get an instance def __init__(self): """Singleton constructor, which just creates the singleton instance.""" if PathResolverSingleton._instance is not None: raise RuntimeError("Only one instance of PathResolverSingleton is allowed!") PathResolverSingleton._instance = self self._mapping = { } def lookup(self, name, default=None): """ Looks up name and returns the resolved path associated with the name. @param name: Name of the path resource to resolve. @param default: Default to return if resource cannot be resolved. 
@return: Resolved path associated with name, or default if name can't be resolved. """ value = default if name in self._mapping.keys(): value = self._mapping[name] logger.debug("Resolved command [%s] to [%s]." % (name, value)) return value def fill(self, mapping): """ Fills in the singleton's internal mapping from name to resource. @param mapping: Mapping from resource name to path. @type mapping: Dictionary mapping name to path, both as strings. """ self._mapping = { } for key in mapping.keys(): self._mapping[key] = mapping[key] ######################################################################## # Pipe class definition ######################################################################## class Pipe(Popen): """ Specialized pipe class for use by C{executeCommand}. The L{executeCommand} function needs a specialized way of interacting with a pipe. First, C{executeCommand} only reads from the pipe, and never writes to it. Second, C{executeCommand} needs a way to discard all output written to C{stderr}, as a means of simulating the shell C{2>/dev/null} construct. """ def __init__(self, cmd, bufsize=-1, ignoreStderr=False): stderr = STDOUT if ignoreStderr: devnull = nullDevice() stderr = os.open(devnull, os.O_RDWR) Popen.__init__(self, shell=False, args=cmd, bufsize=bufsize, stdin=None, stdout=PIPE, stderr=stderr) ######################################################################## # Diagnostics class definition ######################################################################## class Diagnostics(object): """ Class holding runtime diagnostic information. Diagnostic information is information that is useful to get from users for debugging purposes. I'm consolidating it all here into one object. @sort: __init__, __repr__, __str__ """ # pylint: disable=R0201 def __init__(self): """ Constructor for the C{Diagnostics} class. """ def __repr__(self): """ Official string representation for class instance. 
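The singleton-plus-mapping pattern behind C{PathResolverSingleton} can be sketched compactly. This simplified re-implementation uses a classmethod factory instead of the C{_Helper} callable, and the class name C{PathResolver} is illustrative:

```python
class PathResolver(object):
    # Sketch of the PathResolverSingleton pattern: a process-wide
    # mapping from command name to resolved path, with a caller-supplied
    # default when no mapping exists.
    _instance = None

    @classmethod
    def getInstance(cls):
        # Lazily create the single shared instance.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._mapping = {}

    def fill(self, mapping):
        # Copy the mapping so later changes by the caller have no effect.
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        return self._mapping.get(name, default)
```

Because C{lookup} falls back to the passed-in default (typically the bare command name), callers degrade gracefully when the resolver was never filled.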
""" return "Diagnostics()" def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def getValues(self): """ Get a map containing all of the diagnostic values. @return: Map from diagnostic name to diagnostic value. """ values = {} values['version'] = self.version values['interpreter'] = self.interpreter values['platform'] = self.platform values['encoding'] = self.encoding values['locale'] = self.locale values['timestamp'] = self.timestamp return values def printDiagnostics(self, fd=sys.stdout, prefix=""): """ Pretty-print diagnostic information to a file descriptor. @param fd: File descriptor used to print information. @param prefix: Prefix string (if any) to place onto printed lines @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ lines = self._buildDiagnosticLines(prefix) for line in lines: fd.write("%s\n" % line) def logDiagnostics(self, method, prefix=""): """ Pretty-print diagnostic information using a logger method. @param method: Logger method to use for logging (i.e. logger.info) @param prefix: Prefix string (if any) to place onto printed lines """ lines = self._buildDiagnosticLines(prefix) for line in lines: method("%s" % line) def _buildDiagnosticLines(self, prefix=""): """ Build a set of pretty-printed diagnostic lines. @param prefix: Prefix string (if any) to place onto printed lines @return: List of strings, not terminated by newlines. """ values = self.getValues() keys = values.keys() keys.sort() tmax = Diagnostics._getMaxLength(keys) + 3 # three extra dots in output lines = [] for key in keys: title = key.title() title += (tmax - len(title)) * '.' value = values[key] line = "%s%s: %s" % (prefix, title, value) lines.append(line) return lines @staticmethod def _getMaxLength(values): """ Get the maximum length from among a list of strings. 
""" tmax = 0 for value in values: if len(value) > tmax: tmax = len(value) return tmax def _getVersion(self): """ Property target to get the Cedar Backup version. """ return "Cedar Backup %s (%s)" % (VERSION, DATE) def _getInterpreter(self): """ Property target to get the Python interpreter version. """ version = sys.version_info return "Python %d.%d.%d (%s)" % (version[0], version[1], version[2], version[3]) def _getEncoding(self): """ Property target to get the filesystem encoding. """ return sys.getfilesystemencoding() or sys.getdefaultencoding() def _getPlatform(self): """ Property target to get the operating system platform. """ try: if sys.platform.startswith("win"): windowsPlatforms = [ "Windows 3.1", "Windows 95/98/ME", "Windows NT/2000/XP", "Windows CE", ] wininfo = sys.getwindowsversion() # pylint: disable=E1101 winversion = "%d.%d.%d" % (wininfo[0], wininfo[1], wininfo[2]) winplatform = windowsPlatforms[wininfo[3]] wintext = wininfo[4] # i.e. "Service Pack 2" return "%s (%s %s %s)" % (sys.platform, winplatform, winversion, wintext) else: uname = os.uname() sysname = uname[0] # i.e. Linux release = uname[2] # i.e. 2.16.18-2 machine = uname[4] # i.e. i686 return "%s (%s %s %s)" % (sys.platform, sysname, release, machine) except: return sys.platform def _getLocale(self): """ Property target to get the default locale that is in effect. """ try: import locale return locale.getdefaultlocale()[0] except: return "(unknown)" def _getTimestamp(self): """ Property target to get a current date/time stamp. 
""" try: import datetime return datetime.datetime.utcnow().ctime() + " UTC" except: return "(unknown)" version = property(_getVersion, None, None, "Cedar Backup version.") interpreter = property(_getInterpreter, None, None, "Python interpreter version.") platform = property(_getPlatform, None, None, "Platform identifying information.") encoding = property(_getEncoding, None, None, "Filesystem encoding that is in effect.") locale = property(_getLocale, None, None, "Locale that is in effect.") timestamp = property(_getTimestamp, None, None, "Current timestamp.") ######################################################################## # General utility functions ######################################################################## ###################### # sortDict() function ###################### def sortDict(d): """ Returns the keys of the dictionary sorted by value. There are cuter ways to do this in Python 2.4, but we were originally attempting to stay compatible with Python 2.3. @param d: Dictionary to operate on @return: List of dictionary keys sorted in order by dictionary value. """ items = d.items() items.sort(lambda x, y: cmp(x[1], y[1])) return [key for key, value in items] ######################## # removeKeys() function ######################## def removeKeys(d, keys): """ Removes all of the keys from the dictionary. The dictionary is altered in-place. Each key must exist in the dictionary. @param d: Dictionary to operate on @param keys: List of keys to remove @raise KeyError: If one of the keys does not exist """ for key in keys: del d[key] ######################### # convertSize() function ######################### def convertSize(size, fromUnit, toUnit): """ Converts a size in one unit to a size in another unit. This is just a convenience function so that the functionality can be implemented in just one place. Internally, we convert values to bytes and then to the final unit. 
The available units are: - C{UNIT_BYTES} - Bytes - C{UNIT_KBYTES} - Kilobytes, where 1 kB = 1024 B - C{UNIT_MBYTES} - Megabytes, where 1 MB = 1024 kB - C{UNIT_GBYTES} - Gigabytes, where 1 GB = 1024 MB - C{UNIT_SECTORS} - Sectors, where 1 sector = 2048 B @param size: Size to convert @type size: Integer or float value in units of C{fromUnit} @param fromUnit: Unit to convert from @type fromUnit: One of the units listed above @param toUnit: Unit to convert to @type toUnit: One of the units listed above @return: Number converted to new unit, as a float. @raise ValueError: If one of the units is invalid. """ if size is None: raise ValueError("Cannot convert size of None.") if fromUnit == UNIT_BYTES: byteSize = float(size) elif fromUnit == UNIT_KBYTES: byteSize = float(size) * BYTES_PER_KBYTE elif fromUnit == UNIT_MBYTES: byteSize = float(size) * BYTES_PER_MBYTE elif fromUnit == UNIT_GBYTES: byteSize = float(size) * BYTES_PER_GBYTE elif fromUnit == UNIT_SECTORS: byteSize = float(size) * BYTES_PER_SECTOR else: raise ValueError("Unknown 'from' unit %s." % fromUnit) if toUnit == UNIT_BYTES: return byteSize elif toUnit == UNIT_KBYTES: return byteSize / BYTES_PER_KBYTE elif toUnit == UNIT_MBYTES: return byteSize / BYTES_PER_MBYTE elif toUnit == UNIT_GBYTES: return byteSize / BYTES_PER_GBYTE elif toUnit == UNIT_SECTORS: return byteSize / BYTES_PER_SECTOR else: raise ValueError("Unknown 'to' unit %s." % toUnit) ########################## # displayBytes() function ########################## def displayBytes(bytes, digits=2): # pylint: disable=W0622 """ Format a byte quantity so it can be sensibly displayed. It's rather difficult to look at a number like "72372224 bytes" and get any meaningful information out of it. It would be more useful to see something like "69.02 MB". That's what this function does. 
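The convert-via-bytes approach reduces to one multiplication and one division. A minimal sketch, using bytes-per-unit factors directly instead of the C{UNIT_*} constants (the factor values match the documented definitions, e.g. 1 sector = 2048 bytes):

```python
# Bytes-per-unit factors, matching the documented definitions.
BYTES_PER_KBYTE = 1024.0
BYTES_PER_MBYTE = 1024.0 * 1024.0
BYTES_PER_SECTOR = 2048.0

def convert_size(size, from_factor, to_factor):
    # Convert via bytes, as convertSize() does internally: scale the
    # value up to bytes, then down to the target unit.
    if size is None:
        raise ValueError("Cannot convert size of None.")
    return float(size) * from_factor / to_factor
```

For example, converting 1 MB to kB multiplies by 1048576 and divides by 1024, giving 1024.0.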
Any time you want to display a byte value, i.e.:: print "Size: %s bytes" % bytes Call this function instead:: print "Size: %s" % displayBytes(bytes) What comes out will be sensibly formatted. The indicated number of digits will be listed after the decimal point, rounded based on whatever rules are used by Python's standard C{%f} string format specifier. (Values less than 1 kB will be listed in bytes and will not have a decimal point, since the concept of a fractional byte is nonsensical.) @param bytes: Byte quantity. @type bytes: Integer number of bytes. @param digits: Number of digits to display after the decimal point. @type digits: Integer value, typically 2-5. @return: String, formatted for sensible display. """ if(bytes is None): raise ValueError("Cannot display byte value of None.") bytes = float(bytes) if math.fabs(bytes) < BYTES_PER_KBYTE: fmt = "%.0f bytes" value = bytes elif math.fabs(bytes) < BYTES_PER_MBYTE: fmt = "%." + "%d" % digits + "f kB" value = bytes / BYTES_PER_KBYTE elif math.fabs(bytes) < BYTES_PER_GBYTE: fmt = "%." + "%d" % digits + "f MB" value = bytes / BYTES_PER_MBYTE else: fmt = "%." + "%d" % digits + "f GB" value = bytes / BYTES_PER_GBYTE return fmt % value ################################## # getFunctionReference() function ################################## def getFunctionReference(module, function): """ Gets a reference to a named function. This does some hokey-pokey to get back a reference to a dynamically named function. For instance, say you wanted to get a reference to the C{os.path.isdir} function. You could use:: myfunc = getFunctionReference("os.path", "isdir") Although we won't bomb out directly, behavior is pretty much undefined if you pass in C{None} or C{""} for either C{module} or C{function}. The only validation we enforce is that whatever we get back must be callable. I derived this code based on the internals of the Python unittest implementation. I don't claim to completely understand how it works. 
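The formatting logic reads naturally as a standalone sketch (a re-implementation for illustration; the real C{displayBytes} also rejects C{None} input):

```python
import math

BYTES_PER_KBYTE = 1024.0
BYTES_PER_MBYTE = 1024.0 * 1024.0
BYTES_PER_GBYTE = 1024.0 * 1024.0 * 1024.0

def display_bytes(value, digits=2):
    # Mirrors the displayBytes() tiers: plain bytes below 1 kB (no
    # fractional bytes), then kB, MB, and GB with the requested number
    # of digits after the decimal point.
    value = float(value)
    if math.fabs(value) < BYTES_PER_KBYTE:
        return "%.0f bytes" % value
    elif math.fabs(value) < BYTES_PER_MBYTE:
        return ("%." + "%d" % digits + "f kB") % (value / BYTES_PER_KBYTE)
    elif math.fabs(value) < BYTES_PER_GBYTE:
        return ("%." + "%d" % digits + "f MB") % (value / BYTES_PER_MBYTE)
    return ("%." + "%d" % digits + "f GB") % (value / BYTES_PER_GBYTE)
```

This reproduces the docstring's example: 72372224 bytes formats as "69.02 MB".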
@param module: Name of module associated with function. @type module: Something like "os.path" or "CedarBackup2.util" @param function: Name of function @type function: Something like "isdir" or "getUidGid" @return: Reference to function associated with name. @raise ImportError: If the function cannot be found. @raise ValueError: If the resulting reference is not callable. @copyright: Some of this code, prior to customization, was originally part of the Python 2.3 codebase. Python code is copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved. """ parts = [] if module is not None and module != "": parts = module.split(".") if function is not None and function != "": parts.append(function) copy = parts[:] while copy: try: module = __import__(string.join(copy, ".")) break except ImportError: del copy[-1] if not copy: raise parts = parts[1:] obj = module for part in parts: obj = getattr(obj, part) if not callable(obj): raise ValueError("Reference to %s.%s is not callable." % (module, function)) return obj ####################### # getUidGid() function ####################### def getUidGid(user, group): """ Get the uid/gid associated with a user/group pair This is a no-op if user/group functionality is not available on the platform. @param user: User name @type user: User name as a string @param group: Group name @type group: Group name as a string @return: Tuple C{(uid, gid)} matching passed-in user and group. 
@raise ValueError: If the ownership user/group values are invalid """ if _UID_GID_AVAILABLE: try: uid = pwd.getpwnam(user)[2] gid = grp.getgrnam(group)[2] return (uid, gid) except Exception, e: logger.debug("Error looking up uid and gid for [%s:%s]: %s" % (user, group, e)) raise ValueError("Unable to look up uid and gid for the passed-in user/group.") else: return (0, 0) ############################# # changeOwnership() function ############################# def changeOwnership(path, user, group): """ Changes ownership of path to match the user and group. This is a no-op if user/group functionality is not available on the platform, or if either the passed-in user or group is C{None}. Further, we won't even try to do it unless running as root, since it's unlikely to work. @param path: Path whose ownership to change. @param user: User which owns file. @param group: Group which owns file. """ if _UID_GID_AVAILABLE: if user is None or group is None: logger.debug("User or group is None, so not attempting to change owner on [%s]." % path) elif not isRunningAsRoot(): logger.debug("Not root, so not attempting to change owner on [%s]." % path) else: try: (uid, gid) = getUidGid(user, group) os.chown(path, uid, gid) except Exception, e: logger.error("Error changing ownership of [%s]: %s" % (path, e)) ############################# # isRunningAsRoot() function ############################# def isRunningAsRoot(): """ Indicates whether the program is running as the root user. """ return os.getuid() == 0 ############################## # splitCommandLine() function ############################## def splitCommandLine(commandLine): """ Splits a command line string into a list of arguments. Unfortunately, there is no "standard" way to parse a command line string, and it's actually not an easy problem to solve portably (essentially, we have to emulate the shell argument-processing logic). This code only respects double quotes (C{"}) for grouping arguments, not single quotes (C{'}).
Make sure you take this into account when building your command line. Incidentally, I found this particular parsing method while digging around in Google Groups, and I tweaked it for my own use. @param commandLine: Command line string @type commandLine: String, i.e. "cback --verbose stage store" @return: List of arguments, suitable for passing to C{popen2}. @raise ValueError: If the command line is None. """ if commandLine is None: raise ValueError("Cannot split command line of None.") fields = re.findall('[^ "]+|"[^"]+"', commandLine) fields = map(lambda field: field.replace('"', ''), fields) return fields ############################ # resolveCommand() function ############################ def resolveCommand(command): """ Resolves the real path to a command through the path resolver mechanism. Both extensions and standard Cedar Backup functionality need a way to resolve the "real" location of various executables. Normally, they assume that these executables are on the system path, but some callers need to specify an alternate location. Ideally, we want to handle this configuration in a central location. The Cedar Backup path resolver mechanism (a singleton called L{PathResolverSingleton}) provides the central location to store the mappings. This function wraps access to the singleton, and is what all functions (extensions or standard functionality) should call if they need to find a command. The passed-in command must actually be a list, in the standard form used by all existing Cedar Backup code (something like C{["svnlook", ]}). The lookup will actually be done on the first element in the list, and the returned command will always be in list form as well. If the passed-in command can't be resolved or no mapping exists, then the command itself will be returned unchanged. This way, we neatly fall back on default behavior if we have no sensible alternative. @param command: Command to resolve. @type command: List form of command, i.e. C{["svnlook", ]}. 
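The regex-based split can be demonstrated standalone. This sketch uses the same C{re.findall} pattern as C{splitCommandLine}: a field is either a run of non-space, non-quote characters or a double-quoted group, and the quotes are stripped afterward:

```python
import re

def split_command_line(command_line):
    # Same approach as splitCommandLine(): double quotes group
    # arguments; single quotes are not treated specially.
    if command_line is None:
        raise ValueError("Cannot split command line of None.")
    fields = re.findall('[^ "]+|"[^"]+"', command_line)
    return [field.replace('"', '') for field in fields]
```

Quoted groups survive as single arguments, which is the behavior a shell user would expect for double quotes.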
@return: Path to command or just command itself if no mapping exists. """ singleton = PathResolverSingleton.getInstance() name = command[0] result = command[:] result[0] = singleton.lookup(name, name) return result ############################ # executeCommand() function ############################ def executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None): """ Executes a shell command, hopefully in a safe way. This function exists to replace direct calls to C{os.popen} in the Cedar Backup code. It's not safe to call a function such as C{os.popen()} with untrusted arguments, since that can cause problems if the string contains non-safe variables or other constructs (imagine that the argument is C{$WHATEVER}, but C{$WHATEVER} contains something like C{"; rm -fR ~/; echo"} in the current environment). Instead, it's safer to pass a list of arguments in the style supported by C{popen2} or C{popen4}. This function actually uses a specialized C{Pipe} class implemented using either C{subprocess.Popen} or C{popen2.Popen4}. Under the normal case, this function will return a tuple of C{(status, None)} where the status is the wait-encoded return status of the call per the C{popen2.Popen4} documentation. If C{returnOutput} is passed in as C{True}, the function will return a tuple of C{(status, output)} where C{output} is a list of strings, one entry per line in the output from the command. Output is always logged to the C{outputLogger.info()} target, regardless of whether it's returned. By default, C{stdout} and C{stderr} will be intermingled in the output. However, if you pass in C{ignoreStderr=True}, then only C{stdout} will be included in the output. The C{doNotLog} parameter exists so that callers can force the function to not log command output to the debug log. Normally, you would want to log. However, if you're using this function to write huge output files (i.e.
database backups written to C{stdout}) then you might want to avoid putting all that information into the debug log. The C{outputFile} parameter exists to make it easier for a caller to push output into a file, i.e. as a substitute for redirection to a file. If this value is passed in, each time a line of output is generated, it will be written to the file using C{outputFile.write()}. At the end, the file descriptor will be flushed using C{outputFile.flush()}. The caller maintains responsibility for closing the file object appropriately. @note: I know that it's a bit confusing that the command and the arguments are both lists. I could have just required the caller to pass in one big list. However, I think it makes some sense to keep the command (the constant part of what we're executing, i.e. C{"scp -B"}) separate from its arguments, even if they both end up looking kind of similar. @note: You cannot redirect output via shell constructs (i.e. C{>file}, C{2>/dev/null}, etc.) using this function. The redirection string would be passed to the command just like any other argument. However, you can implement the equivalent to redirection using C{ignoreStderr} and C{outputFile}, as discussed above. @note: The operating system environment is partially sanitized before the command is invoked. See L{sanitizeEnvironment} for details. @param command: Shell command to execute @type command: List of individual arguments that make up the command @param args: List of arguments to the command @type args: List of additional arguments to the command @param returnOutput: Indicates whether to return the output of the command @type returnOutput: Boolean C{True} or C{False} @param ignoreStderr: Whether stderr should be discarded @type ignoreStderr: Boolean True or False @param doNotLog: Indicates that output should not be logged. @type doNotLog: Boolean C{True} or C{False} @param outputFile: File object that all output should be written to. 
@type outputFile: File object as returned from C{open()} or C{file()}. @return: Tuple of C{(result, output)} as described above. """ logger.debug("Executing command %s with args %s." % (command, args)) outputLogger.info("Executing command %s with args %s." % (command, args)) if doNotLog: logger.debug("Note: output will not be logged, per the doNotLog flag.") outputLogger.info("Note: output will not be logged, per the doNotLog flag.") output = [] fields = command[:] # make sure to copy it so we don't destroy it fields.extend(args) try: sanitizeEnvironment() # make sure we have a consistent environment try: pipe = Pipe(fields, ignoreStderr=ignoreStderr) except OSError: # On some platforms (i.e. Cygwin) this intermittently fails the first time we do it. # So, we attempt it a second time and if that works, we just go on as usual. # The problem appears to be that we sometimes get a bad stderr file descriptor. pipe = Pipe(fields, ignoreStderr=ignoreStderr) while True: line = pipe.stdout.readline() if not line: break if returnOutput: output.append(line) if outputFile is not None: outputFile.write(line) if not doNotLog: outputLogger.info(line[:-1]) # this way the log will (hopefully) get updated in realtime if outputFile is not None: try: # note, not every file-like object can be flushed outputFile.flush() except: pass if returnOutput: return (pipe.wait(), output) else: return (pipe.wait(), None) except OSError, e: try: if returnOutput: if output != []: return (pipe.wait(), output) else: return (pipe.wait(), [ e, ]) else: return (pipe.wait(), None) except UnboundLocalError: # pipe not set if returnOutput: return (256, []) else: return (256, None) ############################## # calculateFileAge() function ############################## def calculateFileAge(path): """ Calculates the age (in days) of a file. The "age" of a file is the amount of time since the file was last used, per the most recent of the file's C{st_atime} and C{st_mtime} values. 
Technically, we only intend this function to work with files, but it will probably work with anything on the filesystem. @param path: Path to a file on disk. @return: Age of the file in days (possibly fractional). @raise OSError: If the file doesn't exist. """ currentTime = int(time.time()) fileStats = os.stat(path) lastUse = max(fileStats.st_atime, fileStats.st_mtime) # "most recent" is "largest" ageInSeconds = currentTime - lastUse ageInDays = ageInSeconds / SECONDS_PER_DAY return ageInDays ################### # mount() function ################### def mount(devicePath, mountPoint, fsType): """ Mounts the indicated device at the indicated mount point. For instance, to mount a CD, you might use device path C{/dev/cdrw}, mount point C{/media/cdrw} and filesystem type C{iso9660}. You can safely use any filesystem type that is supported by C{mount} on your platform. If the type is C{None}, we'll attempt to let C{mount} auto-detect it. This may or may not work on all systems. @note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line C{"mount"} command, like UNIXes. It won't work on Windows. @param devicePath: Path of device to be mounted. @param mountPoint: Path that device should be mounted at. @param fsType: Type of the filesystem assumed to be available via the device. @raise IOError: If the device cannot be mounted. """ if fsType is None: args = [ devicePath, mountPoint ] else: args = [ "-t", fsType, devicePath, mountPoint ] command = resolveCommand(MOUNT_COMMAND) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True)[0] if result != 0: raise IOError("Error [%d] mounting [%s] at [%s] as [%s]." % (result, devicePath, mountPoint, fsType)) ##################### # unmount() function ##################### def unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0): """ Unmounts whatever device is mounted at the indicated mount point. 
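The retry behavior used by this function follows a generic retry-with-wait pattern, sketched here with a hypothetical helper (illustration only, not part of this module):

```python
import time

def retryWithWait(action, attempts, waitSeconds):
   # Generic retry-with-wait pattern (illustration only): call action()
   # until it succeeds or attempts are exhausted, sleeping waitSeconds
   # between attempts but not after the last one.
   for attempt in range(attempts):
      if action():
         return True
      if attempt + 1 < attempts and waitSeconds > 0:
         time.sleep(waitSeconds)
   return False
```

In the real function, C{action} corresponds to one C{umount} invocation followed by an C{os.path.ismount()} check.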
Sometimes, it might not be possible to unmount the mount point immediately, if there are still files open there. Use the C{attempts} and C{waitSeconds} arguments to indicate how many unmount attempts to make and how many seconds to wait between attempts. If you pass in zero attempts, no attempts will be made (duh). If the indicated mount point is not really a mount point per C{os.path.ismount()}, then it will be ignored. This seems to be a safer check than looking through C{/etc/mtab}, since C{ismount()} is already in the Python standard library and is documented as working on all POSIX systems. If C{removeAfter} is C{True}, then the mount point will be removed using C{os.rmdir()} after the unmount action succeeds. If for some reason the mount point is not a directory, then it will not be removed. @note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line C{"mount"} command, like UNIXes. It won't work on Windows. @param mountPoint: Mount point to be unmounted. @param removeAfter: Remove the mount point after unmounting it. @param attempts: Number of times to attempt the unmount. @param waitSeconds: Number of seconds to wait between repeated attempts. @raise IOError: If the mount point is still mounted after attempts are exhausted. """ if os.path.ismount(mountPoint): for attempt in range(0, attempts): logger.debug("Making attempt %d to unmount [%s]." % (attempt, mountPoint)) command = resolveCommand(UMOUNT_COMMAND) result = executeCommand(command, [ mountPoint, ], returnOutput=False, ignoreStderr=True)[0] if result != 0: logger.error("Error [%d] unmounting [%s] on attempt %d." % (result, mountPoint, attempt)) elif os.path.ismount(mountPoint): logger.error("After attempt %d, [%s] is still mounted." % (attempt, mountPoint)) else: logger.debug("Successfully unmounted [%s] on attempt %d." % (mountPoint, attempt)) break # this will cause us to skip the loop else: clause if attempt+1 < attempts: # i.e.
this isn't the last attempt if waitSeconds > 0: logger.info("Sleeping %d second(s) before next unmount attempt." % waitSeconds) time.sleep(waitSeconds) else: if os.path.ismount(mountPoint): raise IOError("Unable to unmount [%s] after %d attempts." % (mountPoint, attempts)) logger.info("Mount point [%s] seems to have finally gone away." % mountPoint) if os.path.isdir(mountPoint) and removeAfter: logger.debug("Removing mount point [%s]." % mountPoint) os.rmdir(mountPoint) ########################### # deviceMounted() function ########################### def deviceMounted(devicePath): """ Indicates whether a specific filesystem device is currently mounted. We determine whether the device is mounted by looking through the system's C{mtab} file. This file shows every currently-mounted filesystem, ordered by device. We only do the check if the C{mtab} file exists and is readable. Otherwise, we assume that the device is not mounted. @note: This only works on platforms that have a concept of an mtab file to show mounted volumes, like UNIXes. It won't work on Windows. @param devicePath: Path of device to be checked @return: True if device is mounted, false otherwise. """ if os.path.exists(MTAB_FILE) and os.access(MTAB_FILE, os.R_OK): realPath = os.path.realpath(devicePath) lines = open(MTAB_FILE).readlines() for line in lines: (mountDevice, mountPoint, remainder) = line.split(None, 2) if mountDevice in [ devicePath, realPath, ]: logger.debug("Device [%s] is mounted at [%s]." % (devicePath, mountPoint)) return True return False ######################## # encodePath() function ######################## def encodePath(path): r""" Safely encodes a filesystem path. Many Python filesystem functions, such as C{os.listdir}, behave differently if they are passed unicode arguments versus simple string arguments. For instance, C{os.listdir} generally returns unicode path names if it is passed a unicode argument, and string pathnames if it is passed a string argument. 
However, this behavior often isn't as consistent as we might like. As an example, C{os.listdir} "gives up" if it finds a filename that it can't properly encode given the current locale settings. This means that the returned list is a mixed set of unicode and simple string paths. This has consequences later, because other filesystem functions like C{os.path.join} will blow up if they are given one string path and one unicode path. On comp.lang.python, Martin v. Löwis explained the C{os.listdir} behavior like this:: The operating system (POSIX) does not have the inherent notion that file names are character strings. Instead, in POSIX, file names are primarily byte strings. There are some bytes which are interpreted as characters (e.g. '\x2e', which is '.', or '\x2f', which is '/'), but apart from that, most OS layers think these are just bytes. Now, most *people* think that file names are character strings. To interpret a file name as a character string, you need to know what the encoding is to interpret the file names (which are byte strings) as character strings. There is, unfortunately, no operating system API to carry the notion of a file system encoding. By convention, the locale settings should be used to establish this encoding, in particular the LC_CTYPE facet of the locale. This is defined in the environment variables LC_CTYPE, LC_ALL, and LANG (searched in this order). If LANG is not set, the "C" locale is assumed, which uses ASCII as its file system encoding. In this locale, '\xe2\x99\xaa\xe2\x99\xac' is not a valid file name (at least it cannot be interpreted as characters, and hence not be converted to Unicode). Now, your Python script has requested that all file names *should* be returned as character (ie. Unicode) strings, but Python cannot comply, since there is no way to find out what this byte string means, in terms of characters. So we have three options: 1. Skip this string, only return the ones that can be converted to Unicode.
Give the user the impression the file does not exist. 2. Return the string as a byte string 3. Refuse to listdir altogether, raising an exception (i.e. return nothing) Python has chosen alternative 2, allowing the application to implement 1 or 3 on top of that if it wants to (or come up with other strategies, such as user feedback). As a solution, he suggests that rather than passing unicode paths into the filesystem functions, I should sensibly encode the path first. That is what this function accomplishes. Any function which takes a filesystem path as an argument should encode it first, before using it for any other purpose. I confess I still don't completely understand how this works. On a system with filesystem encoding "ISO-8859-1", a path C{u"\xe2\x99\xaa\xe2\x99\xac"} is converted into the string C{"\xe2\x99\xaa\xe2\x99\xac"}. However, on a system with a "utf-8" encoding, the result is a completely different string: C{"\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac"}. A quick test where I write to the first filename and open the second proves that the two strings represent the same file on disk, which is all I really care about. @note: As a special case, if C{path} is C{None}, then this function will return C{None}. @note: To provide several examples of encoding values, my Debian sarge box with an ext3 filesystem has Python filesystem encoding C{ISO-8859-1}. User Anarcat's Debian box with an xfs filesystem has filesystem encoding C{ANSI_X3.4-1968}. Both my iBook G4 running Mac OS X 10.4 and user Dag Rende's SuSE 9.3 box have filesystem encoding C{UTF-8}. @note: Just because a filesystem has C{UTF-8} encoding doesn't mean that it will be able to handle all extended-character filenames. For instance, certain extended-character (but not UTF-8) filenames -- like the ones in the regression test tar file C{test/data/tree13.tar.gz} -- are not valid under Mac OS X, and it's not even possible to extract them from the tarfile on that platform.
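A minimal stand-alone sketch of the encoding step (hypothetical helper name; written so it also runs under modern Python, where text strings are unicode by default):

```python
import sys

def encodeFilesystemPath(path):
   # Illustrative stand-in for encodePath(): text paths are encoded to byte
   # strings using the filesystem encoding; byte strings and None pass
   # through unchanged.
   if path is None or isinstance(path, bytes):
      return path
   encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
   try:
      return path.encode(encoding)
   except UnicodeError:
      raise ValueError("Path could not be safely encoded as %s." % encoding)
```

The real C{encodePath} below has the same shape, but checks C{isinstance(path, unicode)} under Python 2.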
@param path: Path to encode @return: Path, as a string, encoded appropriately @raise ValueError: If the path cannot be encoded properly. """ if path is None: return path try: if isinstance(path, unicode): encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() path = path.encode(encoding) return path except UnicodeError: raise ValueError("Path could not be safely encoded as %s." % encoding) ######################## # nullDevice() function ######################## def nullDevice(): """ Attempts to portably return the null device on this system. The null device is something like C{/dev/null} on a UNIX system. The name varies on other platforms. """ return os.devnull ############################## # deriveDayOfWeek() function ############################## def deriveDayOfWeek(dayName): """ Converts English day name to numeric day of week as from C{time.localtime}. For instance, the day C{monday} would be converted to the number C{0}. @param dayName: Day of week to convert @type dayName: string, i.e. C{"monday"}, C{"tuesday"}, etc. @returns: Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible. """ if dayName.lower() == "monday": return 0 elif dayName.lower() == "tuesday": return 1 elif dayName.lower() == "wednesday": return 2 elif dayName.lower() == "thursday": return 3 elif dayName.lower() == "friday": return 4 elif dayName.lower() == "saturday": return 5 elif dayName.lower() == "sunday": return 6 else: return -1 # What else can we do?? Throw an exception, I guess. ########################### # isStartOfWeek() function ########################### def isStartOfWeek(startingDay): """ Indicates whether "today" is the backup starting day per configuration. If the current day's English name matches the indicated starting day, then today is a starting day. @param startingDay: Configured starting day. @type startingDay: string, i.e. C{"monday"}, C{"tuesday"}, etc. @return: Boolean indicating whether today is the starting day.
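The C{deriveDayOfWeek} mapping above can equivalently be expressed as a table lookup; this sketch (not the shipped code) behaves the same way, including the -1 fallback for unknown names:

```python
DAY_NUMBERS = { "monday": 0, "tuesday": 1, "wednesday": 2, "thursday": 3,
                "friday": 4, "saturday": 5, "sunday": 6 }

def dayOfWeekNumber(dayName):
   # Table-driven equivalent of the if/elif chain in deriveDayOfWeek();
   # unknown names map to -1, matching the original behavior.
   return DAY_NUMBERS.get(dayName.lower(), -1)
```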
""" value = time.localtime().tm_wday == deriveDayOfWeek(startingDay) if value: logger.debug("Today is the start of the week.") else: logger.debug("Today is NOT the start of the week.") return value ################################# # buildNormalizedPath() function ################################# def buildNormalizedPath(path): """ Returns a "normalized" path based on a path name. A normalized path is a representation of a path that is also a valid file name. To make a valid file name out of a complete path, we have to convert or remove some characters that are significant to the filesystem -- in particular, the path separator and any leading C{'.'} character (which would cause the file to be hidden in a file listing). Note that this is a one-way transformation -- you can't safely derive the original path from the normalized path. To normalize a path, we begin by looking at the first character. If the first character is C{'/'} or C{'\\'}, it gets removed. If the first character is C{'.'}, it gets converted to C{'_'}. Then, we look through the rest of the path and convert all remaining C{'/'} or C{'\\'} characters C{'-'}, and all remaining whitespace characters to C{'_'}. As a special case, a path consisting only of a single C{'/'} or C{'\\'} character will be converted to C{'-'}. @param path: Path to normalize @return: Normalized path as described above. @raise ValueError: If the path is None """ if path is None: raise ValueError("Cannot normalize path None.") elif len(path) == 0: return path elif path == "/" or path == "\\": return "-" else: normalized = path normalized = re.sub(r"^\/", "", normalized) # remove leading '/' normalized = re.sub(r"^\\", "", normalized) # remove leading '\' normalized = re.sub(r"^\.", "_", normalized) # convert leading '.' 
to '_' so file won't be hidden normalized = re.sub(r"\/", "-", normalized) # convert all '/' characters to '-' normalized = re.sub(r"\\", "-", normalized) # convert all '\' characters to '-' normalized = re.sub(r"\s", "_", normalized) # convert all whitespace to '_' return normalized ################################# # sanitizeEnvironment() function ################################# def sanitizeEnvironment(): """ Sanitizes the operating system environment. The operating system environment is contained in C{os.environ}. This method sanitizes the contents of that dictionary. Currently, all it does is reset the locale (removing C{$LC_*}) and set the default language (C{$LANG}) to L{DEFAULT_LANGUAGE}. This way, we can count on consistent localization regardless of what the end-user has configured. This is important for code that needs to parse program output. The C{os.environ} dictionary is modified in-place. If C{$LANG} is already set to the proper value, it is not re-set, so we can avoid the memory leaks that are documented to occur on BSD-based systems. @return: Copy of the sanitized environment. """ for var in LOCALE_VARS: if os.environ.has_key(var): del os.environ[var] if os.environ.has_key(LANG_VAR): if os.environ[LANG_VAR] != DEFAULT_LANGUAGE: # no need to reset if it exists (avoid leaks on BSD systems) os.environ[LANG_VAR] = DEFAULT_LANGUAGE return os.environ.copy() ############################# # dereferenceLink() function ############################# def dereferenceLink(path, absolute=True): """ Dereference a soft link, optionally normalizing it to an absolute path. @param path: Path of link to dereference @param absolute: Whether to normalize the result to an absolute path @return: Dereferenced path, or original path if original is not a link.
""" if os.path.islink(path): result = os.readlink(path) if absolute and not os.path.isabs(result): result = os.path.abspath(os.path.join(os.path.dirname(path), result)) return result return path ######################### # checkUnique() function ######################### def checkUnique(prefix, values): """ Checks that all values are unique. The values list is checked for duplicate values. If there are duplicates, an exception is thrown. All duplicate values are listed in the exception. @param prefix: Prefix to use in the thrown exception @param values: List of values to check @raise ValueError: If there are duplicates in the list """ values.sort() duplicates = [] for i in range(1, len(values)): if values[i-1] == values[i]: duplicates.append(values[i]) if duplicates: raise ValueError("%s %s" % (prefix, duplicates)) ####################################### # parseCommaSeparatedString() function ####################################### def parseCommaSeparatedString(commaString): """ Parses a list of values out of a comma-separated string. The items in the list are split by comma, and then have whitespace stripped. As a special case, if C{commaString} is C{None}, then C{None} will be returned. @param commaString: List of values in comma-separated string format. @return: Values from commaString split into a list, or C{None}. """ if commaString is None: return None else: pass1 = commaString.split(",") pass2 = [] for item in pass1: item = item.strip() if len(item) > 0: pass2.append(item) return pass2 CedarBackup2-2.22.0/CedarBackup2/peer.py0000664000175000017500000015300311415165677021317 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
# All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: peer.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Provides backup peer-related objects. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides backup peer-related objects and utility functions. @sort: LocalPeer, RemotePeer @var DEF_COLLECT_INDICATOR: Name of the default collect indicator file. @var DEF_STAGE_INDICATOR: Name of the default stage indicator file. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import shutil # Cedar Backup modules from CedarBackup2.filesystem import FilesystemList from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot from CedarBackup2.util import splitCommandLine, encodePath from CedarBackup2.config import VALID_FAILURE_MODES ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.peer") DEF_RCP_COMMAND = [ "/usr/bin/scp", "-B", "-q", "-C" ] DEF_RSH_COMMAND = [ "/usr/bin/ssh", ] DEF_CBACK_COMMAND = "/usr/bin/cback" DEF_COLLECT_INDICATOR = "cback.collect" DEF_STAGE_INDICATOR = "cback.stage" SU_COMMAND = [ "su" ] ######################################################################## # LocalPeer class definition ######################################################################## class LocalPeer(object): ###################### # Class documentation ###################### """ Backup peer representing a local peer in a backup pool. This is a class representing a local (non-network) peer in a backup pool. Local peers are backed up by simple filesystem copy operations. A local peer has associated with it a name (typically, but not necessarily, a hostname) and a collect directory. The public methods other than the constructor are part of a "backup peer" interface shared with the C{RemotePeer} class. @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, _copyLocalDir, _copyLocalFile, name, collectDir """ ############## # Constructor ############## def __init__(self, name, collectDir, ignoreFailureMode=None): """ Initializes a local backup peer. 
Note that the collect directory must be an absolute path, but does not have to exist when the object is instantiated. We do a lazy validation on this value since we could (potentially) be creating peer objects before an ongoing backup completed. @param name: Name of the backup peer @type name: String, typically a hostname @param collectDir: Path to the peer's collect directory @type collectDir: String representing an absolute local path on disk @param ignoreFailureMode: Ignore failure mode for this peer @type ignoreFailureMode: One of VALID_FAILURE_MODES @raise ValueError: If the name is empty. @raise ValueError: If collect directory is not an absolute path. """ self._name = None self._collectDir = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.ignoreFailureMode = ignoreFailureMode ############# # Properties ############# def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path and cannot be C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If a path cannot be encoded properly. """ if value is None or not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. 
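All of the property targets here follow the same validate-then-assign pattern; a minimal self-contained sketch (hypothetical class, illustration only):

```python
class ValidatedPeer(object):
   # Minimal sketch of the validate-then-assign property pattern used by
   # LocalPeer: the setter validates before storing to a private attribute.
   def __init__(self, name):
      self._name = None
      self.name = name  # routed through the property setter below
   def _setName(self, value):
      if value is None or len(value) < 1:
         raise ValueError("Peer name must be a non-empty string.")
      self._name = value
   def _getName(self):
      return self._name
   name = property(_getName, _setName, None, "Name of the peer.")
```

Because the constructor assigns through the property, invalid values are rejected at construction time as well as on later assignment.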
If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. """ return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer.") collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ################# # Public methods ################# def stagePeer(self, targetDir, ownership=None, permissions=None): """ Stages data from the peer into the indicated local target directory. The collect and target directories must both already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied. @note: The caller is responsible for checking that the indicator exists, if they care. This function only stages the files within the directory. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param targetDir: Target directory to write data into @type targetDir: String representing a directory on disk @param ownership: Owner and group that the staged files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If collect directory is not a directory or does not exist @raise ValueError: If target directory is not a directory, does not exist or is not absolute. 
@raise ValueError: If a path cannot be encoded properly. @raise IOError: If there were no files to stage (i.e. the directory was empty) @raise IOError: If there is an IO error copying a file. @raise OSError: If there is an OS error copying or changing permissions on a file """ targetDir = encodePath(targetDir) if not os.path.isabs(targetDir): logger.debug("Target directory [%s] not an absolute path." % targetDir) raise ValueError("Target directory must be an absolute path.") if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): logger.debug("Collect directory [%s] is not a directory or does not exist on disk." % self.collectDir) raise ValueError("Collect directory is not a directory or does not exist on disk.") if not os.path.exists(targetDir) or not os.path.isdir(targetDir): logger.debug("Target directory [%s] is not a directory or does not exist on disk." % targetDir) raise ValueError("Target directory is not a directory or does not exist on disk.") count = LocalPeer._copyLocalDir(self.collectDir, targetDir, ownership, permissions) if count == 0: raise IOError("Did not copy any files from local peer.") return count def checkCollectIndicator(self, collectIndicator=None): """ Checks the collect indicator in the peer's staging directory. When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. We're "stupid" here - if the collect directory doesn't exist, you'll naturally get back C{False}. If you need to, you can override the name of the collect indicator file by passing in a different name. @param collectIndicator: Name of the collect indicator file to check @type collectIndicator: String representing name of a file in the collect directory @return: Boolean true/false depending on whether the indicator exists. @raise ValueError: If a path cannot be encoded properly. 
""" collectIndicator = encodePath(collectIndicator) if collectIndicator is None: return os.path.exists(os.path.join(self.collectDir, DEF_COLLECT_INDICATOR)) else: return os.path.exists(os.path.join(self.collectDir, collectIndicator)) def writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None): """ Writes the stage indicator in the peer's staging directory. When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete. If you need to, you can override the name of the stage indicator file by passing in a different name. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param stageIndicator: Name of the indicator file to write @type stageIndicator: String representing name of a file in the collect directory @param ownership: Owner and group that the indicator file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the indicator file should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @raise ValueError: If collect directory is not a directory or does not exist @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there is an IO error creating the file. @raise OSError: If there is an OS error creating or changing permissions on the file """ stageIndicator = encodePath(stageIndicator) if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): logger.debug("Collect directory [%s] is not a directory or does not exist on disk." 
% self.collectDir) raise ValueError("Collect directory is not a directory or does not exist on disk.") if stageIndicator is None: fileName = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) else: fileName = os.path.join(self.collectDir, stageIndicator) LocalPeer._copyLocalFile(None, fileName, ownership, permissions) # None for sourceFile results in an empty target ################## # Private methods ################## @staticmethod def _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None): """ Copies files from the source directory to the target directory. This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. The source and target directories are allowed to be soft links to a directory, but besides that soft links are ignored. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param sourceDir: Source directory @type sourceDir: String representing a directory on disk @param targetDir: Target directory @type targetDir: String representing a directory on disk @param ownership: Owner and group that the copied files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If source or target is not a directory or does not exist. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there is an IO error copying the files. 
@raise OSError: If there is an OS error copying or changing permissions on a file """ filesCopied = 0 sourceDir = encodePath(sourceDir) targetDir = encodePath(targetDir) for fileName in os.listdir(sourceDir): sourceFile = os.path.join(sourceDir, fileName) targetFile = os.path.join(targetDir, fileName) LocalPeer._copyLocalFile(sourceFile, targetFile, ownership, permissions) filesCopied += 1 return filesCopied @staticmethod def _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True): """ Copies a source file to a target file. If the source file is C{None} then the target file will be created or overwritten as an empty file. If the target file is C{None}, this method is a no-op. Attempting to copy a soft link or a directory will result in an exception. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: If C{overwrite} is C{False} and the target file already exists when this method is invoked, we'll raise an exception rather than overwrite it. @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param ownership: Owner and group that the copied file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise ValueError: If the passed-in source file is not a regular file. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If the target file already exists.
@raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error copying or changing permissions on a file """ targetFile = encodePath(targetFile) sourceFile = encodePath(sourceFile) if targetFile is None: return if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if sourceFile is None: open(targetFile, "w").write("") else: if os.path.isfile(sourceFile) and not os.path.islink(sourceFile): shutil.copy(sourceFile, targetFile) else: logger.debug("Source [%s] is not a regular file." % sourceFile) raise ValueError("Source is not a regular file.") if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) ######################################################################## # RemotePeer class definition ######################################################################## class RemotePeer(object): ###################### # Class documentation ###################### """ Backup peer representing a remote peer in a backup pool. This is a class representing a remote (networked) peer in a backup pool. Remote peers are backed up using an rcp-compatible copy command. A remote peer has associated with it a name (which must be a valid hostname), a collect directory, a working directory and a copy method (an rcp-compatible command). You can also set an optional local user value. This username will be used as the local user for any remote copies that are required. It can only be used if the root user is executing the backup. The root user will C{su} to the local user and execute the remote copies as that user. The copy method is associated with the peer and not with the actual request to copy, because we can envision that each remote host might have a different connect method. The public methods other than the constructor are part of a "backup peer" interface shared with the C{LocalPeer} class. 
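The local-copy semantics described above (a C{None} source creates an empty target, links and directories are rejected, and an existing target is refused when overwriting is disallowed) can be sketched standalone. This is an illustrative simplification, not the real C{LocalPeer._copyLocalFile}; it skips the path encoding and ownership handling.

```python
import os
import shutil

def copy_local_file(source, target, permissions=None, overwrite=True):
    """Sketch of the copy semantics above (not the real implementation).

    A None source creates/overwrites the target as an empty file; soft
    links and directories are rejected with ValueError.
    """
    if target is None:
        return                                  # no-op, per the docstring
    if not overwrite and os.path.exists(target):
        raise IOError("Target file [%s] already exists." % target)
    if source is None:
        open(target, "w").close()               # empty indicator-style file
    elif os.path.isfile(source) and not os.path.islink(source):
        shutil.copy(source, target)
    else:
        raise ValueError("Source is not a regular file.")
    if permissions is not None:
        os.chmod(target, permissions)
```

This mirrors the way the stage/collect indicator files are produced: passing a C{None} source is how an empty marker file gets written.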
@sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, executeRemoteCommand, executeManagedAction, _getDirContents, _copyRemoteDir, _copyRemoteFile, _pushLocalFile, name, collectDir, remoteUser, rcpCommand, rshCommand, cbackCommand """ ############## # Constructor ############## def __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None): """ Initializes a remote backup peer. @note: If provided, each command will eventually be parsed into a list of strings suitable for passing to C{util.executeCommand} in order to avoid security holes related to shell interpolation. This parsing will be done by the L{util.splitCommandLine} function. See the documentation for that function for some important notes about its limitations. @param name: Name of the backup peer @type name: String, must be a valid DNS hostname @param collectDir: Path to the peer's collect directory @type collectDir: String representing an absolute path on the remote peer @param workingDir: Working directory that can be used to create temporary files, etc. @type workingDir: String representing an absolute path on the current host. 
@param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via remote shell to the peer @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rshCommand: An rsh-compatible command to use for remote shells to the peer @type rshCommand: String representing a system command including required arguments @param cbackCommand: A cback-compatible command to use for executing managed actions @type cbackCommand: String representing a system command including required arguments @param ignoreFailureMode: Ignore failure mode for this peer @type ignoreFailureMode: One of VALID_FAILURE_MODES @raise ValueError: If collect directory is not an absolute path """ self._name = None self._collectDir = None self._workingDir = None self._remoteUser = None self._localUser = None self._rcpCommand = None self._rcpCommandList = None self._rshCommand = None self._rshCommandList = None self._cbackCommand = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.workingDir = workingDir self.remoteUser = remoteUser self.localUser = localUser self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.ignoreFailureMode = ignoreFailureMode ############# # Properties ############# def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. 
""" return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path and cannot be C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setWorkingDir(self, value): """ Property target used to set the working directory. The value must be an absolute path and cannot be C{None}. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Working directory must be an absolute path.") self._workingDir = encodePath(value) def _getWorkingDir(self): """ Property target used to get the working directory. """ return self._workingDir def _setRemoteUser(self, value): """ Property target used to set the remote user. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer remote user must be a non-empty string.") self._remoteUser = value def _getRemoteUser(self): """ Property target used to get the remote user. """ return self._remoteUser def _setLocalUser(self, value): """ Property target used to set the local user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("Peer local user must be a non-empty string.") self._localUser = value def _getLocalUser(self): """ Property target used to get the local user. """ return self._localUser def _setRcpCommand(self, value): """ Property target to set the rcp command. The value must be a non-empty string or C{None}. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to L{util.executeCommand} via L{util.splitCommandLine}. However, all the caller will ever see via the property is the actual value they set (which includes seeing C{None}, even if we translate that internally to C{DEF_RCP_COMMAND}). Internally, we should always use C{self._rcpCommandList} if we want the actual command list. @raise ValueError: If the value is an empty string. """ if value is None: self._rcpCommand = None self._rcpCommandList = DEF_RCP_COMMAND else: if len(value) >= 1: self._rcpCommand = value self._rcpCommandList = splitCommandLine(self._rcpCommand) else: raise ValueError("The rcp command must be a non-empty string.") def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target to set the rsh command. The value must be a non-empty string or C{None}. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to L{util.executeCommand} via L{util.splitCommandLine}. However, all the caller will ever see via the property is the actual value they set (which includes seeing C{None}, even if we translate that internally to C{DEF_RSH_COMMAND}). Internally, we should always use C{self._rshCommandList} if we want the actual command list. @raise ValueError: If the value is an empty string. 
""" if value is None: self._rshCommand = None self._rshCommandList = DEF_RSH_COMMAND else: if len(value) >= 1: self._rshCommand = value self._rshCommandList = splitCommandLine(self._rshCommand) else: raise ValueError("The rsh command must be a non-empty string.") def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target to set the cback command. The value must be a non-empty string or C{None}. Unlike the other command, this value is only stored in the "raw" form provided by the client. @raise ValueError: If the value is an empty string. """ if value is None: self._cbackCommand = None else: if len(value) >= 1: self._cbackCommand = value else: raise ValueError("The cback command must be a non-empty string.") def _getCbackCommand(self): """ Property target used to get the cback command. """ return self._cbackCommand def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. 
""" return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer (a valid DNS hostname).") collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).") workingDir = property(_getWorkingDir, _setWorkingDir, None, "Path to the peer's working directory (an absolute local path).") remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of the Cedar Backup user on the remote peer.") localUser = property(_getLocalUser, _setLocalUser, None, "Name of the Cedar Backup user on the current host.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "An rcp-compatible copy command to use for copying files.") rshCommand = property(_getRshCommand, _setRshCommand, None, "An rsh-compatible command to use for remote shells to the peer.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "A chack-compatible command to use for executing managed actions.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ################# # Public methods ################# def stagePeer(self, targetDir, ownership=None, permissions=None): """ Stages data from the peer into the indicated local target directory. The target directory must already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied. @note: The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: Unlike the local peer version of this method, an I/O error might or might not be raised if the directory is empty. 
Since we're using a remote copy method, we just don't have the fine-grained control over our exceptions that's available when we can look directly at the filesystem, and we can't control whether the remote copy method thinks an empty directory is an error. @param targetDir: Target directory to write data into @type targetDir: String representing a directory on disk @param ownership: Owner and group that the staged files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If target directory is not a directory, does not exist or is not absolute. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there were no files to stage (i.e. the directory was empty) @raise IOError: If there is an IO error copying a file. @raise OSError: If there is an OS error copying or changing permissions on a file """ targetDir = encodePath(targetDir) if not os.path.isabs(targetDir): logger.debug("Target directory [%s] not an absolute path." % targetDir) raise ValueError("Target directory must be an absolute path.") if not os.path.exists(targetDir) or not os.path.isdir(targetDir): logger.debug("Target directory [%s] is not a directory or does not exist on disk." % targetDir) raise ValueError("Target directory is not a directory or does not exist on disk.") count = RemotePeer._copyRemoteDir(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, self.collectDir, targetDir, ownership, permissions) if count == 0: raise IOError("Did not copy any files from remote peer.") return count def checkCollectIndicator(self, collectIndicator=None): """ Checks the collect indicator in the peer's collect directory. 
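The probe-and-clean-up pattern that C{checkCollectIndicator} implements below (treat any fetch failure as "not present", and always remove the local copy afterwards) can be sketched with a pluggable fetch callable standing in for the rcp-compatible command. The default indicator name here is only an illustrative assumption.

```python
import os

def indicator_present(fetch, working_dir, name="cback.collect"):
    """Return True only if `fetch` delivers the indicator file locally.

    `fetch(target)` is a hypothetical stand-in for the remote copy
    command; any exception it raises is treated as "indicator not
    present".  The fetched local copy is always removed afterwards.
    """
    target = os.path.join(working_dir, name)
    try:
        try:
            fetch(target)
            return os.path.exists(target)   # scp may "succeed" yet copy nothing
        except Exception:
            return False
    finally:
        if os.path.exists(target):
            try:
                os.remove(target)
            except OSError:
                pass
```

Checking for the file's existence rather than trusting the copy command's exit status is the workaround for rcp implementations that return zero even on failure.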
When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. If the remote copy command fails, we return C{False} as if the file weren't there. If you need to, you can override the name of the collect indicator file by passing in a different name. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. Because of this, the implementation of this method is rather convoluted. @param collectIndicator: Name of the collect indicator file to check @type collectIndicator: String representing name of a file in the collect directory @return: Boolean true/false depending on whether the indicator exists. @raise ValueError: If a path cannot be encoded properly. """ try: if collectIndicator is None: sourceFile = os.path.join(self.collectDir, DEF_COLLECT_INDICATOR) targetFile = os.path.join(self.workingDir, DEF_COLLECT_INDICATOR) else: collectIndicator = encodePath(collectIndicator) sourceFile = os.path.join(self.collectDir, collectIndicator) targetFile = os.path.join(self.workingDir, collectIndicator) logger.debug("Fetch remote [%s] into [%s]." % (sourceFile, targetFile)) if os.path.exists(targetFile): try: os.remove(targetFile) except: raise Exception("Error: collect indicator [%s] already exists!" 
% targetFile) try: RemotePeer._copyRemoteFile(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, sourceFile, targetFile, overwrite=False) if os.path.exists(targetFile): return True else: return False except Exception, e: logger.info("Failed looking for collect indicator: %s" % e) return False finally: if os.path.exists(targetFile): try: os.remove(targetFile) except: pass def writeStageIndicator(self, stageIndicator=None): """ Writes the stage indicator in the peer's collect directory. When the master has completed staging the peer's backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete. If you need to, you can override the name of the stage indicator file by passing in a different name. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param stageIndicator: Name of the indicator file to write @type stageIndicator: String representing name of a file in the collect directory @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there is an IO error creating the file. 
@raise OSError: If there is an OS error creating or changing permissions on the file """ stageIndicator = encodePath(stageIndicator) if stageIndicator is None: sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) targetFile = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) else: sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) targetFile = os.path.join(self.collectDir, stageIndicator) try: if not os.path.exists(sourceFile): open(sourceFile, "w").write("") RemotePeer._pushLocalFile(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, sourceFile, targetFile) finally: if os.path.exists(sourceFile): try: os.remove(sourceFile) except: pass def executeRemoteCommand(self, command): """ Executes a command on the peer via remote shell. @param command: Command to execute @type command: String command-line suitable for use with rsh. @raise IOError: If there is an error executing the command on the remote peer. """ RemotePeer._executeRemoteCommand(self.remoteUser, self.localUser, self.name, self._rshCommand, self._rshCommandList, command) def executeManagedAction(self, action, fullBackup): """ Executes a managed action on this peer. @param action: Name of the action to execute. @param fullBackup: Whether a full backup should be executed. @raise IOError: If there is an error executing the action on the remote peer. """ try: command = RemotePeer._buildCbackCommand(self.cbackCommand, action, fullBackup) self.executeRemoteCommand(command) except IOError, e: logger.info(e) raise IOError("Failed to execute action [%s] on managed client [%s]." % (action, self.name)) ################## # Private methods ################## @staticmethod def _getDirContents(path): """ Returns the contents of a directory in terms of a Set. The directory's contents are read as a L{FilesystemList} containing only files, and then the list is converted into a set object for later use. 
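Because the directory contents come back as a set, detecting what an opaque copy command added to the target directory is just a set difference, which is exactly how C{_copyRemoteDir} counts copied files below. A minimal sketch (non-recursive, plain C{os.listdir} standing in for L{FilesystemList}):

```python
import os

def dir_file_set(path):
    """Non-recursive set of absolute file paths in a directory."""
    return set(os.path.join(path, name)
               for name in os.listdir(path)
               if os.path.isfile(os.path.join(path, name)))

def files_added_by(action, target_dir):
    """Snapshot before and after `action`, returning the new files."""
    before = dir_file_set(target_dir)
    action()                        # e.g. the opaque rcp invocation
    return dir_file_set(target_dir) - before
```

As the note below concedes, this is only as reliable as the assumption that nothing else is writing to the directory during the copy.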
@param path: Directory path to get contents for @type path: String representing a path on disk @return: Set of files in the directory @raise ValueError: If path is not a directory or does not exist. """ contents = FilesystemList() contents.excludeDirs = True contents.excludeLinks = True contents.addDirContents(path) try: return set(contents) except: import sets return sets.Set(contents) @staticmethod def _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None): """ Copies files from the source directory to the target directory. This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. Behavior when copying soft links from the collect directory is dependent on the behavior of the specified rcp command. @note: The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: We don't have a good way of knowing exactly what files we copied down from the remote peer, unless we want to parse the output of the rcp command (ugh). We could change permissions on everything in the target directory, but that's kind of ugly too. Instead, we use Python's set functionality to figure out what files were added while we executed the rcp command. This isn't perfect - for instance, it's not correct if someone else is messing with the directory at the same time we're doing the remote copy - but it's about as good as we're going to get. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. 
As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing C{IOError} if we don't copy any files from the remote host. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceDir: Source directory @type sourceDir: String representing a directory on disk @param targetDir: Target directory @type targetDir: String representing a directory on disk @param ownership: Owner and group that the copied files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If source or target is not a directory or does not exist. @raise IOError: If there is an IO error copying the files. 
""" beforeSet = RemotePeer._getDirContents(targetDir) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = "%s %s@%s:%s/* %s" % (rcpCommand, remoteUser, remoteHost, sourceDir, targetDir) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying files from remote host as local user [%s]." % (result, localUser)) else: copySource = "%s@%s:%s/*" % (remoteUser, remoteHost, sourceDir) command = resolveCommand(rcpCommandList) result = executeCommand(command, [copySource, targetDir])[0] if result != 0: raise IOError("Error (%d) copying files from remote host." % result) afterSet = RemotePeer._getDirContents(targetDir) if len(afterSet) == 0: raise IOError("Did not copy any files from remote peer.") differenceSet = afterSet.difference(beforeSet) # files we added as part of copy if len(differenceSet) == 0: raise IOError("Apparently did not copy any new files from remote peer.") for targetFile in differenceSet: if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) return len(differenceSet) @staticmethod def _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True): """ Copies a remote source file to a target file. @note: Internally, we have to go through and escape any spaces in the source path with double-backslash, otherwise things get screwed up. It doesn't seem to be required in the target path. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH). @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. 
@note: If C{overwrite} is passed in as C{False}, we will not overwrite a target file that exists when this method is invoked; instead, we'll raise an exception. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing C{IOError} if the target file does not exist when we're done. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param ownership: Owner and group that the copied file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise IOError: If the target file already exists and C{overwrite} is C{False}. 
@raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error changing permissions on the file """ if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = "%s %s@%s:%s %s" % (rcpCommand, remoteUser, remoteHost, sourceFile.replace(" ", "\\ "), targetFile) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying [%s] from remote host as local user [%s]." % (result, sourceFile, localUser)) else: copySource = "%s@%s:%s" % (remoteUser, remoteHost, sourceFile.replace(" ", "\\ ")) command = resolveCommand(rcpCommandList) result = executeCommand(command, [copySource, targetFile])[0] if result != 0: raise IOError("Error (%d) copying [%s] from remote host." % (result, sourceFile)) if not os.path.exists(targetFile): raise IOError("Apparently unable to copy file from remote host.") if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) @staticmethod def _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True): """ Copies a local source file to a remote host. @note: If C{overwrite} is passed in as C{False}, we will not overwrite a target file that exists when this method is invoked; instead, we'll raise an exception. @note: Internally, we have to go through and escape any spaces in the source and target paths with double-backslash, otherwise things get screwed up. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH). 
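The space-escaping described in the notes above amounts to a tiny helper; a sketch of how the scp-style source spec is built (the backslash-escape convention is an assumption that matches OpenSSH, per the caveat above):

```python
def remote_copy_source(remote_user, remote_host, source_path):
    """Build a user@host:path spec for an rcp-compatible command,
    escaping spaces in the remote path with a backslash so the remote
    shell doesn't split the path into multiple arguments."""
    escaped = source_path.replace(" ", "\\ ")
    return "%s@%s:%s" % (remote_user, remote_host, escaped)
```

Note that only the remote side needs escaping: the local target path is passed as a single argument to C{util.executeCommand}, so no shell ever re-splits it.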
@note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error changing permissions on the file """ if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = '%s "%s" "%s@%s:%s"' % (rcpCommand, sourceFile, remoteUser, remoteHost, targetFile) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying [%s] to remote host as local user [%s]." 
% (result, sourceFile, localUser)) else: copyTarget = "%s@%s:%s" % (remoteUser, remoteHost, targetFile.replace(" ", "\\ ")) command = resolveCommand(rcpCommandList) result = executeCommand(command, [sourceFile.replace(" ", "\\ "), copyTarget])[0] if result != 0: raise IOError("Error (%d) copying [%s] to remote host." % (result, sourceFile)) @staticmethod def _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand): """ Executes a command on the peer via remote shell. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid on the remote host @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rshCommand: An rsh-compatible copy command to use for remote shells to the peer @type rshCommand: String representing a system command including required arguments @param rshCommandList: An rsh-compatible copy command to use for remote shells to the peer @type rshCommandList: Command as a list to be passed to L{util.executeCommand} @param remoteCommand: The command to be executed on the remote host @type remoteCommand: String command-line, with no special shell characters ($, <, etc.) 
@raise IOError: If there is an error executing the remote command """ actualCommand = "%s %s@%s '%s'" % (rshCommand, remoteUser, remoteHost, remoteCommand) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote shell as another user.") except AttributeError: pass command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Command failed [su -c %s \"%s\"]" % (localUser, actualCommand)) else: command = resolveCommand(rshCommandList) result = executeCommand(command, ["%s@%s" % (remoteUser, remoteHost), "%s" % remoteCommand])[0] if result != 0: raise IOError("Command failed [%s]" % (actualCommand)) @staticmethod def _buildCbackCommand(cbackCommand, action, fullBackup): """ Builds a Cedar Backup command line for the named action. @note: If the cback command is None, then DEF_CBACK_COMMAND is used. @param cbackCommand: cback command to execute, including required options @param action: Name of the action to execute. @param fullBackup: Whether a full backup should be executed. @return: String suitable for passing to L{_executeRemoteCommand} as remoteCommand. @raise ValueError: If action is None. """ if action is None: raise ValueError("Action cannot be None.") if cbackCommand is None: cbackCommand = DEF_CBACK_COMMAND if fullBackup: return "%s --full %s" % (cbackCommand, action) else: return "%s %s" % (cbackCommand, action) CedarBackup2-2.22.0/CedarBackup2/xmlutil.py0000664000175000017500000006116311645150366022061 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2006,2010 Kenneth J. Pronovici. # All rights reserved. 
# # Portions Copyright (c) 2000 Fourthought Inc, USA. # All Rights Reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: xmlutil.py 1022 2011-10-11 23:27:49Z pronovic $ # Purpose : Provides general XML-related functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides general XML-related functionality. What I'm trying to do here is abstract much of the functionality that directly accesses the DOM tree. This is not so much to "protect" the other code from the DOM, but to standardize the way it's used. It will also help extension authors write code that easily looks more like the rest of Cedar Backup. @sort: createInputDom, createOutputDom, serializeDom, isElement, readChildren, readFirstChild, readStringList, readString, readInteger, readBoolean, addContainerNode, addStringNode, addIntegerNode, addBooleanNode, TRUE_BOOLEAN_VALUES, FALSE_BOOLEAN_VALUES, VALID_BOOLEAN_VALUES @var TRUE_BOOLEAN_VALUES: List of boolean values in XML representing C{True}. @var FALSE_BOOLEAN_VALUES: List of boolean values in XML representing C{False}. @var VALID_BOOLEAN_VALUES: List of valid boolean values in XML. @author: Kenneth J. 
Pronovici """ # pylint: disable=C0111,C0103,W0511,W0104 ######################################################################## # Imported modules ######################################################################## # System modules import sys import re import logging import codecs from types import UnicodeType from StringIO import StringIO # XML-related modules from xml.parsers.expat import ExpatError from xml.dom.minidom import Node from xml.dom.minidom import getDOMImplementation from xml.dom.minidom import parseString ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.xml") TRUE_BOOLEAN_VALUES = [ "Y", "y", ] FALSE_BOOLEAN_VALUES = [ "N", "n", ] VALID_BOOLEAN_VALUES = TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES ######################################################################## # Functions for creating and parsing DOM trees ######################################################################## def createInputDom(xmlData, name="cb_config"): """ Creates a DOM tree based on reading an XML string. @param name: Assumed base name of the document (root node name). @return: Tuple (xmlDom, parentNode) for the parsed document @raise ValueError: If the document can't be parsed. """ try: xmlDom = parseString(xmlData) parentNode = readFirstChild(xmlDom, name) return (xmlDom, parentNode) except (IOError, ExpatError), e: raise ValueError("Unable to parse XML document: %s" % e) def createOutputDom(name="cb_config"): """ Creates a DOM tree used for writing an XML document. @param name: Base name of the document (root node name). 
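The parse side described above can be approximated with the standard library alone. This sketch mirrors `createInputDom` (parse a string, return the DOM plus the root node, converting parse failures to `ValueError`); the real function locates the root via `readFirstChild` rather than `documentElement`, and the snippet uses Python 3 syntax for clarity:

```python
from xml.dom.minidom import parseString
from xml.parsers.expat import ExpatError

def create_input_dom(xml_data, name="cb_config"):
    """Parse an XML string and return (dom, root), like createInputDom above."""
    try:
        dom = parseString(xml_data)
    except ExpatError as e:
        # Same convention as the source: parse errors surface as ValueError
        raise ValueError("Unable to parse XML document: %s" % e)
    root = dom.documentElement
    if root.tagName != name:
        raise ValueError("Document root is not [%s]." % name)
    return dom, root

dom, root = create_input_dom("<cb_config><options/></cb_config>")
print(root.tagName)  # cb_config
```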
@return: Tuple (xmlDom, parentNode) for the new document """ impl = getDOMImplementation() xmlDom = impl.createDocument(None, name, None) return (xmlDom, xmlDom.documentElement) ######################################################################## # Functions for reading values out of XML documents ######################################################################## def isElement(node): """ Returns True or False depending on whether the XML node is an element node. """ return node.nodeType == Node.ELEMENT_NODE def readChildren(parent, name): """ Returns a list of nodes with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. Underneath, we use the Python C{getElementsByTagName} method, which is pretty cool, but which (surprisingly?) returns a list of all children with a given name below the parent, at any level. We just prune that list to include only children whose C{parentNode} matches the passed-in parent. @param parent: Parent node to search beneath. @param name: Name of nodes to search for. @return: List of child nodes with correct parent, or an empty list if no matching nodes are found. """ lst = [] if parent is not None: result = parent.getElementsByTagName(name) for entry in result: if entry.parentNode is parent: lst.append(entry) return lst def readFirstChild(parent, name): """ Returns the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: First properly-named child of parent, or C{None} if no matching nodes are found. 
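The "immediately beneath" pruning that `readChildren` performs can be seen directly: `getElementsByTagName` matches at any depth, and filtering on `parentNode` keeps only direct children. A self-contained demonstration:

```python
from xml.dom.minidom import parseString

def read_children(parent, name):
    # Same filter readChildren applies: keep only direct children of parent.
    return [n for n in parent.getElementsByTagName(name) if n.parentNode is parent]

dom = parseString("<root><dir/><sub><dir/></sub></root>")
root = dom.documentElement
print(len(root.getElementsByTagName("dir")))  # 2 -- includes the nested match
print(len(read_children(root, "dir")))        # 1 -- direct child only
```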
""" result = readChildren(parent, name) if result is None or result == []: return None return result[0] def readStringList(parent, name): """ Returns a list of the string contents associated with nodes with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. First, we find all of the nodes using L{readChildren}, and then we retrieve the "string contents" of each of those nodes. The returned list has one entry per matching node. We assume that string contents of a given node belong to the first C{TEXT_NODE} child of that node. Nodes which have no C{TEXT_NODE} children are not represented in the returned list. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: List of strings as described above, or C{None} if no matching nodes are found. """ lst = [] result = readChildren(parent, name) for entry in result: if entry.hasChildNodes(): for child in entry.childNodes: if child.nodeType == Node.TEXT_NODE: lst.append(child.nodeValue) break if lst == []: lst = None return lst def readString(parent, name): """ Returns string contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. We assume that string contents of a given node belong to the first C{TEXT_NODE} child of that node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: String contents of node or C{None} if no matching nodes are found. """ result = readStringList(parent, name) if result is None: return None return result[0] def readInteger(parent, name): """ Returns integer contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. 
@param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Integer contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to an integer. """ result = readString(parent, name) if result is None: return None else: return int(result) def readFloat(parent, name): """ Returns float contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Float contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to a float value. """ result = readString(parent, name) if result is None: return None else: return float(result) def readBoolean(parent, name): """ Returns boolean contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. The string value of the node must be one of the values in L{VALID_BOOLEAN_VALUES}. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Boolean contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to a boolean. """ result = readString(parent, name) if result is None: return None else: if result in TRUE_BOOLEAN_VALUES: return True elif result in FALSE_BOOLEAN_VALUES: return False else: raise ValueError("Boolean values must be one of %s." 
% VALID_BOOLEAN_VALUES) ######################################################################## # Functions for writing values into XML documents ######################################################################## def addContainerNode(xmlDom, parentNode, nodeName): """ Adds a container node as the next child of a parent node. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @return: Reference to the newly-created node. """ containerNode = xmlDom.createElement(nodeName) parentNode.appendChild(containerNode) return containerNode def addStringNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a string. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ containerNode = addContainerNode(xmlDom, parentNode, nodeName) if nodeValue is not None: textNode = xmlDom.createTextNode(nodeValue) containerNode.appendChild(textNode) return containerNode def addIntegerNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain an integer. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). The integer will be converted to a string using "%d". The result will be added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node.
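The write-side helpers above reduce to `createElement`/`appendChild`/`createTextNode` calls. A self-contained sketch of `addStringNode` (the `starting_day` element name is just an illustrative example, not taken from this file):

```python
from xml.dom.minidom import getDOMImplementation

impl = getDOMImplementation()
dom = impl.createDocument(None, "cb_config", None)
parent = dom.documentElement

def add_string_node(dom, parent, name, value):
    # Mirrors addContainerNode + addStringNode: create the child element,
    # and attach a text node only when there is a value to store.
    node = dom.createElement(name)
    parent.appendChild(node)
    if value is not None:
        node.appendChild(dom.createTextNode(value))
    return node

add_string_node(dom, parent, "starting_day", "monday")
print(dom.documentElement.toxml())
```

`addIntegerNode` and `addBooleanNode` simply format the value (`"%d"`, or `"Y"`/`"N"`) before delegating to the same string helper.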
""" if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a boolean. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). Boolean C{True}, or anything else interpreted as C{True} by Python, will be converted to a string "Y". Anything else will be converted to a string "N". The result is added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: if nodeValue: return addStringNode(xmlDom, parentNode, nodeName, "Y") else: return addStringNode(xmlDom, parentNode, nodeName, "N") ######################################################################## # Functions for serializing DOM trees ######################################################################## def serializeDom(xmlDom, indent=3): """ Serializes a DOM tree and returns the result in a string. @param xmlDom: XML DOM tree to serialize @param indent: Number of spaces to indent, as an integer @return: String form of DOM tree, pretty-printed. """ xmlBuffer = StringIO() serializer = Serializer(xmlBuffer, "UTF-8", indent=indent) serializer.serialize(xmlDom) xmlData = xmlBuffer.getvalue() xmlBuffer.close() return xmlData class Serializer(object): """ XML serializer class. This is a customized serializer that I hacked together based on what I found in the PyXML distribution. 
Basically, around release 2.7.0, the only reason I still had around a dependency on PyXML was for the PrettyPrint functionality, and that seemed pointless. So, I stripped the PrettyPrint code out of PyXML and hacked bits of it off until it did just what I needed and no more. This code started out being called PrintVisitor, but I decided it makes more sense just calling it a serializer. I've made nearly all of the methods private, and I've added a new high-level serialize() method rather than having clients call C{visit()}. Anyway, as a consequence of my hacking with it, this can't quite be called a complete XML serializer any more. I ripped out support for HTML and XHTML, and there is also no longer any support for namespaces (which I took out because this dragged along a lot of extra code, and Cedar Backup doesn't use namespaces). However, everything else should pretty much work as expected. @copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved. """ def __init__(self, stream=sys.stdout, encoding="UTF-8", indent=3): """ Initialize a serializer. @param stream: Stream to write output to. @param encoding: Output encoding. @param indent: Number of spaces to indent, as an integer """ self.stream = stream self.encoding = encoding self._indent = indent * " " self._depth = 0 self._inText = 0 def serialize(self, xmlDom): """ Serialize the passed-in XML document. @param xmlDom: XML DOM tree to serialize @raise ValueError: If there's an unknown node type in the document.
""" self._visit(xmlDom) self.stream.write("\n") def _write(self, text): obj = _encodeText(text, self.encoding) self.stream.write(obj) return def _tryIndent(self): if not self._inText and self._indent: self._write('\n' + self._indent*self._depth) return def _visit(self, node): """ @raise ValueError: If there's an unknown node type in the document. """ if node.nodeType == Node.ELEMENT_NODE: return self._visitElement(node) elif node.nodeType == Node.ATTRIBUTE_NODE: return self._visitAttr(node) elif node.nodeType == Node.TEXT_NODE: return self._visitText(node) elif node.nodeType == Node.CDATA_SECTION_NODE: return self._visitCDATASection(node) elif node.nodeType == Node.ENTITY_REFERENCE_NODE: return self._visitEntityReference(node) elif node.nodeType == Node.ENTITY_NODE: return self._visitEntity(node) elif node.nodeType == Node.PROCESSING_INSTRUCTION_NODE: return self._visitProcessingInstruction(node) elif node.nodeType == Node.COMMENT_NODE: return self._visitComment(node) elif node.nodeType == Node.DOCUMENT_NODE: return self._visitDocument(node) elif node.nodeType == Node.DOCUMENT_TYPE_NODE: return self._visitDocumentType(node) elif node.nodeType == Node.DOCUMENT_FRAGMENT_NODE: return self._visitDocumentFragment(node) elif node.nodeType == Node.NOTATION_NODE: return self._visitNotation(node) # It has a node type, but we don't know how to handle it raise ValueError("Unknown node type: %s" % repr(node)) def _visitNodeList(self, node, exclude=None): for curr in node: curr is not exclude and self._visit(curr) return def _visitNamedNodeMap(self, node): for item in node.values(): self._visit(item) return def _visitAttr(self, node): self._write(' ' + node.name) value = node.value text = _translateCDATA(value, self.encoding) text, delimiter = _translateCDATAAttr(text) self.stream.write("=%s%s%s" % (delimiter, text, delimiter)) return def _visitProlog(self): self._write("" % (self.encoding or 'utf-8')) self._inText = 0 return def _visitDocument(self, node): self._visitProlog() 
      node.doctype and self._visitDocumentType(node.doctype)
      self._visitNodeList(node.childNodes, exclude=node.doctype)
      return

   def _visitDocumentFragment(self, node):
      self._visitNodeList(node.childNodes)
      return

   def _visitElement(self, node):
      self._tryIndent()
      self._write('<%s' % node.tagName)
      for attr in node.attributes.values():
         self._visitAttr(attr)
      if len(node.childNodes):
         self._write('>')
         self._depth = self._depth + 1
         self._visitNodeList(node.childNodes)
         self._depth = self._depth - 1
         not (self._inText) and self._tryIndent()
         self._write('</%s>' % node.tagName)
      else:
         self._write('/>')
      self._inText = 0
      return

   def _visitText(self, node):
      text = node.data
      if self._indent:
         text.strip()
      if text:
         text = _translateCDATA(text, self.encoding)
         self.stream.write(text)
         self._inText = 1
      return

   def _visitDocumentType(self, doctype):
      if not doctype.systemId and not doctype.publicId: return
      self._tryIndent()
      self._write('<!DOCTYPE %s' % doctype.name)
      if doctype.systemId and '"' in doctype.systemId:
         system = "'%s'" % doctype.systemId
      else:
         system = '"%s"' % doctype.systemId
      if doctype.publicId and '"' in doctype.publicId:
         # Valid characters: <space> | <newline> | <linefeed> |
         #                   [a-zA-Z0-9] | [-'()+,./:=?;!*#@$_%]
         public = "'%s'" % doctype.publicId
      else:
         public = '"%s"' % doctype.publicId
      if doctype.publicId and doctype.systemId:
         self._write(' PUBLIC %s %s' % (public, system))
      elif doctype.systemId:
         self._write(' SYSTEM %s' % system)
      if doctype.entities or doctype.notations:
         self._write(' [')
         self._depth = self._depth + 1
         self._visitNamedNodeMap(doctype.entities)
         self._visitNamedNodeMap(doctype.notations)
         self._depth = self._depth - 1
         self._tryIndent()
         self._write(']>')
      else:
         self._write('>')
      self._inText = 0
      return

   def _visitEntity(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!ENTITY %s' % (node.nodeName))
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      node.notationName and self._write(' NDATA %s' % node.notationName)
      self._write('>')
      return

   def _visitNotation(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!NOTATION %s' % node.nodeName)
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      self._write('>')
      return

   def _visitCDATASection(self, node):
      self._tryIndent()
      self._write('<![CDATA[%s]]>' % (node.data))
      self._inText = 0
      return

   def _visitComment(self, node):
      self._tryIndent()
      self._write('<!--%s-->' % (node.data))
      self._inText = 0
      return

   def _visitEntityReference(self, node):
      self._write('&%s;' % node.nodeName)
      self._inText = 1
      return

   def _visitProcessingInstruction(self, node):
      self._tryIndent()
      self._write('<?%s %s?>' % (node.target, node.data))
      self._inText = 0
      return

def _encodeText(text, encoding):
   """
   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was attributed to Martin
   v. Löwis and was Copyright (c) 2000 Fourthought Inc, USA; All Rights
   Reserved.
   """
   encoder = codecs.lookup(encoding)[0] # encode,decode,reader,writer
   if type(text) is not UnicodeType:
      text = unicode(text, "utf-8")
   return encoder(text)[0] # result,size

def _translateCDATAAttr(characters):
   """
   Handles normalization and some intelligence about quoting.
   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was Copyright (c) 2000
   Fourthought Inc, USA; All Rights Reserved.
   """
   if not characters:
      return '', "'"
   if "'" in characters:
      delimiter = '"'
      new_chars = re.sub('"', '&quot;', characters)
   else:
      delimiter = "'"
      new_chars = re.sub("'", '&apos;', characters)
   #FIXME: There's more to normalization
   #Convert attribute new-lines to character entity
   # characters is possibly shorter than new_chars (no entities)
   if "\n" in characters:
      new_chars = re.sub('\n', '&#10;', new_chars)
   return new_chars, delimiter

#Note: Unicode object only for now
def _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0):
   """
   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was Copyright (c) 2000
   Fourthought Inc, USA; All Rights Reserved.
""" CDATA_CHAR_PATTERN = re.compile('[&<]|]]>') CHAR_TO_ENTITY = { '&': '&', '<': '<', ']]>': ']]>', } ILLEGAL_LOW_CHARS = '[\x01-\x08\x0B-\x0C\x0E-\x1F]' ILLEGAL_HIGH_CHARS = '\xEF\xBF[\xBE\xBF]' XML_ILLEGAL_CHAR_PATTERN = re.compile('%s|%s'%(ILLEGAL_LOW_CHARS, ILLEGAL_HIGH_CHARS)) if not characters: return '' if not markupSafe: if CDATA_CHAR_PATTERN.search(characters): new_string = CDATA_CHAR_PATTERN.subn(lambda m, d=CHAR_TO_ENTITY: d[m.group()], characters)[0] else: new_string = characters if prev_chars[-2:] == ']]' and characters[0] == '>': new_string = '>' + new_string[1:] else: new_string = characters #Note: use decimal char entity rep because some browsers are broken #FIXME: This will bomb for high characters. Should, for instance, detect #The UTF-8 for 0xFFFE and put out ￾ if XML_ILLEGAL_CHAR_PATTERN.search(new_string): new_string = XML_ILLEGAL_CHAR_PATTERN.subn(lambda m: '&#%i;' % ord(m.group()), new_string)[0] new_string = _encodeText(new_string, encoding) return new_string CedarBackup2-2.22.0/CedarBackup2/extend/0002775000175000017500000000000012143054371021264 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/CedarBackup2/extend/mbox.py0000664000175000017500000015356611415165677022636 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Official Cedar Backup Extensions
# Revision : $Id: mbox.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Provides an extension to back up mbox email files.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides an extension to back up mbox email files.

Backing up email
================

Email folders (often stored as mbox flatfiles) are not well-suited to being
backed up with an incremental backup like the one offered by Cedar Backup.
This is because mbox files often change on a daily basis, forcing the
incremental backup process to back them up every day in order to avoid
losing data.  This can result in quite a bit of wasted space when backing up
large folders.  (Note that the alternative maildir format does not share
this problem, since it typically uses one file per message.)

One solution to this problem is to design a smarter incremental backup
process, which backs up baseline content on the first day of the week, and
then backs up only new messages added to that folder on every other day of
the week.  This way, the backup for any single day is only as large as the
messages placed into the folder on that day.  The backup isn't as "perfect"
as the incremental backup process, because it doesn't preserve information
about messages deleted from the backed-up folder.
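The weekly-baseline/daily-new-messages scheme described above boils down to a small decision: full backup on the first day of the backup week (or when no prior revision exists), otherwise back up only messages newer than the last revision. A hedged standalone sketch of that decision (the function name and day numbering are illustrative, not taken from the source; the real extension tracks revisions via `REVISION_PATH_EXTENSION` files and uses `grepmail` to do the date filtering):

```python
def backup_scope(weekday, start_of_week, last_revision):
    """Return ('full', None) or ('incremental', since) for an mbox backup.

    weekday/start_of_week are day indexes (e.g. 0=Monday); last_revision is
    the timestamp of the previous successful backup, or None.
    """
    if weekday == start_of_week or last_revision is None:
        return ("full", None)                   # baseline: whole mbox file
    return ("incremental", last_revision)       # only messages newer than this

print(backup_scope(0, 0, None))            # ('full', None)
print(backup_scope(2, 0, "2013-05-07"))    # ('incremental', '2013-05-07')
```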
However, it should be much more space-efficient, and in a recovery situation, it seems better to restore too much data rather than too little. What is this extension? ======================= This is a Cedar Backup extension used to back up mbox email files via the Cedar Backup command line. Individual mbox files or directories containing mbox files can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental. It implements the "smart" incremental backup process discussed above, using functionality provided by the C{grepmail} utility. This extension requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. The mbox action is conceptually similar to the standard collect action, except that mbox directories are not collected recursively. This implies some configuration changes (i.e. there's no need for global exclusions or an ignore file). If you back up a directory, all of the mbox files in that directory are backed up into a single tar file using the indicated compression method. @author: Kenneth J. 
Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import logging
import datetime
import pickle
import tempfile
from bz2 import BZ2File
from gzip import GzipFile

# Cedar Backup modules
from CedarBackup2.filesystem import FilesystemList, BackupFileList
from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode
from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList
from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES
from CedarBackup2.util import isStartOfWeek, buildNormalizedPath
from CedarBackup2.util import resolveCommand, executeCommand
from CedarBackup2.util import ObjectTypeList, UnorderedList, RegexList, encodePath, changeOwnership

########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.extend.mbox")

GREPMAIL_COMMAND = [ "grepmail", ]
REVISION_PATH_EXTENSION = "mboxlast"

########################################################################
# MboxFile class definition
########################################################################

class MboxFile(object):

   """
   Class representing mbox file configuration.

   The following restrictions exist on data in this class:

      - The absolute path must be absolute.
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.

   @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, compressMode
   """

   def __init__(self, absolutePath=None, collectMode=None, compressMode=None):
      """
      Constructor for the C{MboxFile} class.

      You should never directly instantiate this class.
      @param absolutePath: Absolute path to an mbox file on disk.
      @param collectMode: Overridden collect mode for this mbox file.
      @param compressMode: Overridden compression mode for this mbox file.
      """
      self._absolutePath = None
      self._collectMode = None
      self._compressMode = None
      self.absolutePath = absolutePath
      self.collectMode = collectMode
      self.compressMode = compressMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "MboxFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.absolutePath != other.absolutePath:
         if self.absolutePath < other.absolutePath:
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.compressMode != other.compressMode:
         if self.compressMode < other.compressMode:
            return -1
         else:
            return 1
      return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Absolute path must be, er, an absolute path.")
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
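The validating-property pattern used throughout this class can be seen in a few lines: assignment through the property funnels the value through the setter's check. A standalone sketch (the `VALID_COLLECT_MODES` values mirror `CedarBackup2.config` but should be treated as an assumption here; `Example` is a hypothetical class, shown in Python 3 syntax):

```python
# Sketch of the property-based validation used by MboxFile/MboxDir.
VALID_COLLECT_MODES = ["daily", "weekly", "incr"]   # assumed to match config

class Example(object):
    def __init__(self):
        self._collectMode = None
    def _setCollectMode(self, value):
        # None is allowed; anything else must be a known mode
        if value is not None and value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value
    def _getCollectMode(self):
        return self._collectMode
    collectMode = property(_getCollectMode, _setCollectMode, None, "Collect mode.")

e = Example()
e.collectMode = "incr"
print(e.collectMode)  # incr
```

Invalid assignments (`e.collectMode = "bogus"`) raise `ValueError` at assignment time, so configuration objects can never hold an invalid mode.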
""" if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox file.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox file.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox file.") ######################################################################## # MboxDir class definition ######################################################################## class MboxDir(object): """ Class representing mbox directory configuration.. The following restrictions exist on data in this class: - The absolute path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. Unlike collect directory configuration, this is the only place exclusions are allowed (no global exclusions at the configuration level). Also, we only allow relative exclusions and there is no configured ignore file. This is because mbox directory backups are not recursive. 
   @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode,
          compressMode, relativeExcludePaths, excludePatterns
   """

   def __init__(self, absolutePath=None, collectMode=None,
                compressMode=None, relativeExcludePaths=None, excludePatterns=None):
      """
      Constructor for the C{MboxDir} class.

      You should never directly instantiate this class.

      @param absolutePath: Absolute path to an mbox directory on disk.
      @param collectMode: Overridden collect mode for this directory.
      @param compressMode: Overridden compression mode for this directory.
      @param relativeExcludePaths: List of relative paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude.
      """
      self._absolutePath = None
      self._collectMode = None
      self._compressMode = None
      self._relativeExcludePaths = None
      self._excludePatterns = None
      self.absolutePath = absolutePath
      self.collectMode = collectMode
      self.compressMode = compressMode
      self.relativeExcludePaths = relativeExcludePaths
      self.excludePatterns = excludePatterns

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "MboxDir(%s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode,
                                              self.compressMode, self.relativeExcludePaths,
                                              self.excludePatterns)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Absolute path must be, er, an absolute path.") self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception, e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. """ return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. 
""" return self._excludePatterns absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox directory.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox directory.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox directory.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") ######################################################################## # MboxConfig class definition ######################################################################## class MboxConfig(object): """ Class representing mbox configuration. Mbox configuration is used for backing up mbox email files. The following restrictions exist on data in this class: - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The C{mboxFiles} list must be a list of C{MboxFile} objects - The C{mboxDirs} list must be a list of C{MboxDir} objects For the C{mboxFiles} and C{mboxDirs} lists, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element is of the proper type. Unlike collect configuration, no global exclusions are allowed on this level. We only allow relative exclusions at the mbox directory level. Also, there is no configured ignore file. This is because mbox directory backups are not recursive. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, mboxFiles, mboxDirs """ def __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None): """ Constructor for the C{MboxConfig} class. @param collectMode: Default collect mode. @param compressMode: Default compress mode. @param mboxFiles: List of mbox files to back up @param mboxDirs: List of mbox directories to back up @raise ValueError: If one of the values is invalid. """ self._collectMode = None self._compressMode = None self._mboxFiles = None self._mboxDirs = None self.collectMode = collectMode self.compressMode = compressMode self.mboxFiles = mboxFiles self.mboxDirs = mboxDirs def __repr__(self): """ Official string representation for class instance. """ return "MboxConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.mboxFiles, self.mboxDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.mboxFiles != other.mboxFiles: if self.mboxFiles < other.mboxFiles: return -1 else: return 1 if self.mboxDirs != other.mboxDirs: if self.mboxDirs < other.mboxDirs: return -1 else: return 1 return 0 def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setMboxFiles(self, value): """ Property target used to set the mboxFiles list. Either the value must be C{None} or each element must be an C{MboxFile}. @raise ValueError: If the value is not an C{MboxFile} """ if value is None: self._mboxFiles = None else: try: saved = self._mboxFiles self._mboxFiles = ObjectTypeList(MboxFile, "MboxFile") self._mboxFiles.extend(value) except Exception, e: self._mboxFiles = saved raise e def _getMboxFiles(self): """ Property target used to get the mboxFiles list. """ return self._mboxFiles def _setMboxDirs(self, value): """ Property target used to set the mboxDirs list. Either the value must be C{None} or each element must be an C{MboxDir}. @raise ValueError: If the value is not an C{MboxDir} """ if value is None: self._mboxDirs = None else: try: saved = self._mboxDirs self._mboxDirs = ObjectTypeList(MboxDir, "MboxDir") self._mboxDirs.extend(value) except Exception, e: self._mboxDirs = saved raise e def _getMboxDirs(self): """ Property target used to get the mboxDirs list. 
""" return self._mboxDirs collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") mboxFiles = property(_getMboxFiles, _setMboxFiles, None, doc="List of mbox files to back up.") mboxDirs = property(_getMboxDirs, _setMboxDirs, None, doc="List of mbox directories to back up.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Mbox-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, mbox, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. 
@note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._mbox = None self.mbox = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.mbox) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.mbox != other.mbox: if self.mbox < other.mbox: return -1 else: return 1 return 0 def _setMbox(self, value): """ Property target used to set the mbox configuration value. If not C{None}, the value must be a C{MboxConfig} object. 
@raise ValueError: If the value is not a C{MboxConfig} """ if value is None: self._mbox = None else: if not isinstance(value, MboxConfig): raise ValueError("Value must be a C{MboxConfig} object.") self._mbox = value def _getMbox(self): """ Property target used to get the mbox configuration value. """ return self._mbox mbox = property(_getMbox, _setMbox, None, "Mbox configuration in terms of a C{MboxConfig} object.") def validate(self): """ Validates configuration represented by the object. Mbox configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the combined list of mbox files and directories must contain at least one entry. Each configured file or directory must contain an absolute path, and then must either be able to take collect mode and compress mode configuration from the parent C{MboxConfig} object, or must set each value on its own. @raise ValueError: If one of the validations fails. """ if self.mbox is None: raise ValueError("Mbox section is required.") if ((self.mbox.mboxFiles is None or len(self.mbox.mboxFiles) < 1) and \ (self.mbox.mboxDirs is None or len(self.mbox.mboxDirs) < 1)): raise ValueError("At least one mbox file or directory must be configured.") if self.mbox.mboxFiles is not None: for mboxFile in self.mbox.mboxFiles: if mboxFile.absolutePath is None: raise ValueError("Each mbox file must set an absolute path.") if self.mbox.collectMode is None and mboxFile.collectMode is None: raise ValueError("Collect mode must either be set in parent mbox section or individual mbox file.") if self.mbox.compressMode is None and mboxFile.compressMode is None: raise ValueError("Compress mode must either be set in parent mbox section or individual mbox file.") if self.mbox.mboxDirs is not None: for mboxDir in self.mbox.mboxDirs: if mboxDir.absolutePath is None: raise ValueError("Each mbox directory must set an absolute path.") if self.mbox.collectMode is None and mboxDir.collectMode is None: raise ValueError("Collect mode must 
either be set in parent mbox section or individual mbox directory.") if self.mbox.compressMode is None and mboxDir.compressMode is None: raise ValueError("Compress mode must either be set in parent mbox section or individual mbox directory.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: collectMode //cb_config/mbox/collect_mode compressMode //cb_config/mbox/compress_mode We also add groups of the following items, one list element per item:: mboxFiles //cb_config/mbox/file mboxDirs //cb_config/mbox/dir The mbox files and mbox directories are added by L{_addMboxFile} and L{_addMboxDir}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.mbox is not None: sectionNode = addContainerNode(xmlDom, parentNode, "mbox") addStringNode(xmlDom, sectionNode, "collect_mode", self.mbox.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", self.mbox.compressMode) if self.mbox.mboxFiles is not None: for mboxFile in self.mbox.mboxFiles: LocalConfig._addMboxFile(xmlDom, sectionNode, mboxFile) if self.mbox.mboxDirs is not None: for mboxDir in self.mbox.mboxDirs: LocalConfig._addMboxDir(xmlDom, sectionNode, mboxDir) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the mbox configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._mbox = LocalConfig._parseMbox(parentNode) @staticmethod def _parseMbox(parent): """ Parses an mbox configuration section. 
We read the following individual fields:: collectMode //cb_config/mbox/collect_mode compressMode //cb_config/mbox/compress_mode We also read groups of the following items, one list element per item:: mboxFiles //cb_config/mbox/file mboxDirs //cb_config/mbox/dir The mbox files are parsed by L{_parseMboxFiles} and the mbox directories are parsed by L{_parseMboxDirs}. @param parent: Parent node to search beneath. @return: C{MboxConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ mbox = None section = readFirstChild(parent, "mbox") if section is not None: mbox = MboxConfig() mbox.collectMode = readString(section, "collect_mode") mbox.compressMode = readString(section, "compress_mode") mbox.mboxFiles = LocalConfig._parseMboxFiles(section) mbox.mboxDirs = LocalConfig._parseMboxDirs(section) return mbox @staticmethod def _parseMboxFiles(parent): """ Reads a list of C{MboxFile} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode collect_mode compressMode compress_mode @param parent: Parent node to search beneath. @return: List of C{MboxFile} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "file"): if isElement(entry): mboxFile = MboxFile() mboxFile.absolutePath = readString(entry, "abs_path") mboxFile.collectMode = readString(entry, "collect_mode") mboxFile.compressMode = readString(entry, "compress_mode") lst.append(mboxFile) if lst == []: lst = None return lst @staticmethod def _parseMboxDirs(parent): """ Reads a list of C{MboxDir} objects from immediately beneath the parent. 
We read the following individual fields:: absolutePath abs_path collectMode collect_mode compressMode compress_mode We also read groups of the following items, one list element per item:: relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. @param parent: Parent node to search beneath. @return: List of C{MboxDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "dir"): if isElement(entry): mboxDir = MboxDir() mboxDir.absolutePath = readString(entry, "abs_path") mboxDir.collectMode = readString(entry, "collect_mode") mboxDir.compressMode = readString(entry, "compress_mode") (mboxDir.relativeExcludePaths, mboxDir.excludePatterns) = LocalConfig._parseExclusions(entry) lst.append(mboxDir) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: relative exclude/rel_path patterns exclude/pattern If there are none of some item (i.e. no relative path items) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (relative, patterns) exclusions. """ section = readFirstChild(parentNode, "exclude") if section is None: return (None, None) else: relative = readStringList(section, "rel_path") patterns = readStringList(section, "pattern") return (relative, patterns) @staticmethod def _addMboxFile(xmlDom, parentNode, mboxFile): """ Adds an mbox file container as the next child of a parent. We add the following fields to the document:: absolutePath file/abs_path collectMode file/collect_mode compressMode file/compress_mode The node itself is created as the next child of the parent node. This method only adds one mbox file node. The parent must loop for each mbox file in the C{MboxConfig} object. 
If C{mboxFile} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param mboxFile: MboxFile to be added to the document. """ if mboxFile is not None: sectionNode = addContainerNode(xmlDom, parentNode, "file") addStringNode(xmlDom, sectionNode, "abs_path", mboxFile.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", mboxFile.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", mboxFile.compressMode) @staticmethod def _addMboxDir(xmlDom, parentNode, mboxDir): """ Adds an mbox directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path collectMode dir/collect_mode compressMode dir/compress_mode We also add groups of the following items, one list element per item:: relativeExcludePaths dir/exclude/rel_path excludePatterns dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one mbox directory node. The parent must loop for each mbox directory in the C{MboxConfig} object. If C{mboxDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param mboxDir: MboxDir to be added to the document. 
""" if mboxDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", mboxDir.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", mboxDir.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", mboxDir.compressMode) if ((mboxDir.relativeExcludePaths is not None and mboxDir.relativeExcludePaths != []) or (mboxDir.excludePatterns is not None and mboxDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if mboxDir.relativeExcludePaths is not None: for relativePath in mboxDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if mboxDir.excludePatterns is not None: for pattern in mboxDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the mbox backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing mbox extended action.") newRevision = datetime.datetime.today() # mark here so all actions are after this date/time if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) todayIsStart = isStartOfWeek(config.options.startingDay) fullBackup = options.full or todayIsStart logger.debug("Full backup flag is [%s]" % fullBackup) if local.mbox.mboxFiles is not None: for mboxFile in local.mbox.mboxFiles: logger.debug("Working with mbox file [%s]" % mboxFile.absolutePath) collectMode = _getCollectMode(local, mboxFile) compressMode = _getCompressMode(local, mboxFile) lastRevision = _loadLastRevision(config, mboxFile, fullBackup, collectMode) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Mbox file meets criteria to be backed up today.") _backupMboxFile(config, mboxFile.absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision) else: logger.debug("Mbox file will not be backed up, per collect mode.") if collectMode == 'incr': _writeNewRevision(config, mboxFile, newRevision) if local.mbox.mboxDirs is not None: for mboxDir in local.mbox.mboxDirs: logger.debug("Working with mbox directory [%s]" % mboxDir.absolutePath) collectMode = _getCollectMode(local, mboxDir) compressMode = _getCompressMode(local, mboxDir) lastRevision = _loadLastRevision(config, mboxDir, fullBackup, collectMode) (excludePaths, excludePatterns) = _getExclusions(mboxDir) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Mbox directory meets criteria to be backed up today.") _backupMboxDir(config, mboxDir.absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns) else: logger.debug("Mbox directory will not be backed up, per collect mode.") if collectMode == 'incr': 
_writeNewRevision(config, mboxDir, newRevision) logger.info("Executed the mbox extended action successfully.") def _getCollectMode(local, item): """ Gets the collect mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section. @param local: LocalConfig object. @param item: Mbox file or directory @return: Collect mode to use. """ if item.collectMode is None: collectMode = local.mbox.collectMode else: collectMode = item.collectMode logger.debug("Collect mode is [%s]" % collectMode) return collectMode def _getCompressMode(local, item): """ Gets the compress mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section. @param local: LocalConfig object. @param item: Mbox file or directory @return: Compress mode to use. """ if item.compressMode is None: compressMode = local.mbox.compressMode else: compressMode = item.compressMode logger.debug("Compress mode is [%s]" % compressMode) return compressMode def _getRevisionPath(config, item): """ Gets the path to the revision file associated with a repository. @param config: Cedar Backup configuration. @param item: Mbox file or directory @return: Absolute path to the revision file associated with the repository. """ normalized = buildNormalizedPath(item.absolutePath) filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) revisionPath = os.path.join(config.options.workingDir, filename) logger.debug("Revision file path is [%s]" % revisionPath) return revisionPath def _loadLastRevision(config, item, fullBackup, collectMode): """ Loads the last revision date for this item from disk and returns it. If this is a full backup, or if the revision file cannot be loaded for some reason, then C{None} is returned. This indicates that there is no previous revision, so the entire mail file or directory should be backed up. 
@note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write. @param config: Cedar Backup configuration. @param item: Mbox file or directory @param fullBackup: Indicates whether this is a full backup @param collectMode: Indicates the collect mode for this item @return: Revision date as a datetime.datetime object or C{None}. """ revisionPath = _getRevisionPath(config, item) if fullBackup: revisionDate = None logger.debug("Revision file ignored because this is a full backup.") elif collectMode in ['weekly', 'daily']: revisionDate = None logger.debug("No revision file based on collect mode [%s]." % collectMode) else: logger.debug("Revision file will be used for non-full incremental backup.") if not os.path.isfile(revisionPath): revisionDate = None logger.debug("Revision file [%s] does not exist on disk." % revisionPath) else: try: revisionDate = pickle.load(open(revisionPath, "r")) logger.debug("Loaded revision file [%s] from disk: [%s]" % (revisionPath, revisionDate)) except: revisionDate = None logger.error("Failed loading revision file [%s] from disk." % revisionPath) return revisionDate def _writeNewRevision(config, item, newRevision): """ Writes new revision information to disk. If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception. @note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write. @param config: Cedar Backup configuration. @param item: Mbox file or directory @param newRevision: Revision date as a datetime.datetime object. 
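As the notes above describe, the revision bookkeeping pickles the datetime object itself, so whatever precision the object carries is exactly what comes back on the next incremental run. A minimal round-trip sketch (the file path and `.mboxlast` extension here are hypothetical; the real code derives the name from `buildNormalizedPath` and `REVISION_PATH_EXTENSION`):

```python
import datetime
import os
import pickle
import tempfile

# Hypothetical revision file path, standing in for what _getRevisionPath builds.
revisionPath = os.path.join(tempfile.mkdtemp(), "home_user_mail.mboxlast")
newRevision = datetime.datetime(2013, 5, 9, 12, 30, 0)

# Write the revision, as _writeNewRevision does (the Python 2 original uses
# text mode "w"; binary mode is required for pickle on modern Python).
with open(revisionPath, "wb") as f:
    pickle.dump(newRevision, f)

# Read it back, as _loadLastRevision does for a non-full incremental backup.
with open(revisionPath, "rb") as f:
    loadedRevision = pickle.load(f)
```

Because the object is pickled whole, no date formatting or parsing is involved, and full sub-second precision would survive the round trip unchanged.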
""" revisionPath = _getRevisionPath(config, item) try: pickle.dump(newRevision, open(revisionPath, "w")) changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new revision file [%s] to disk: [%s]" % (revisionPath, newRevision)) except: logger.error("Failed to write revision file [%s] to disk." % revisionPath) def _getExclusions(mboxDir): """ Gets exclusions (file and patterns) associated with an mbox directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the mbox directory's relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the mbox directory's list of patterns. @param mboxDir: Mbox directory object. @return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if mboxDir.relativeExcludePaths is not None: for relativePath in mboxDir.relativeExcludePaths: paths.append(os.path.join(mboxDir.absolutePath, relativePath)) patterns = [] if mboxDir.excludePatterns is not None: patterns.extend(mboxDir.excludePatterns) logger.debug("Exclude paths: %s" % paths) logger.debug("Exclude patterns: %s" % patterns) return(paths, patterns) def _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None): """ Gets the backup file path (including correct extension) associated with an mbox path. We assume that if the target directory is passed in, that we're backing up a directory. Under these circumstances, we'll just use the basename of the individual path as the output file. @note: The backup path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object. @param config: Cedar Backup configuration. 
@param mboxPath: Path to the indicated mbox file or directory @param compressMode: Compress mode to use for this mbox path @param newRevision: Revision this backup path represents @param targetDir: Target directory in which the path should exist @return: Absolute path to the backup file associated with the repository. """ if targetDir is None: normalizedPath = buildNormalizedPath(mboxPath) revisionDate = newRevision.strftime("%Y%m%d") filename = "mbox-%s-%s" % (revisionDate, normalizedPath) else: filename = os.path.basename(mboxPath) if compressMode == 'gzip': filename = "%s.gz" % filename elif compressMode == 'bzip2': filename = "%s.bz2" % filename if targetDir is None: backupPath = os.path.join(config.collect.targetDir, filename) else: backupPath = os.path.join(targetDir, filename) logger.debug("Backup file path is [%s]" % backupPath) return backupPath def _getTarfilePath(config, mboxPath, compressMode, newRevision): """ Gets the tarfile backup file path (including correct extension) associated with an mbox path. Along with the path, the tar archive mode is returned in a form that can be used with L{BackupFileList.generateTarfile}. @note: The tarfile path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object. @param config: Cedar Backup configuration. 
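The naming scheme above (date in YYYYMMDD form, normalized path, compress-mode extension) can be sketched as follows. This is a simplified stand-in: the real code normalizes the path via `buildNormalizedPath`, whose exact behavior is not shown here, so the `replace`-based normalization below is an assumption for illustration only.

```python
import datetime

def buildBackupFilename(mboxPath, compressMode, newRevision):
    """Sketch of the backup filename scheme used by _getBackupPath
    (path normalization here is a simplified stand-in)."""
    normalizedPath = mboxPath.strip("/").replace("/", "_")
    revisionDate = newRevision.strftime("%Y%m%d")
    filename = "mbox-%s-%s" % (revisionDate, normalizedPath)
    if compressMode == "gzip":
        filename = "%s.gz" % filename
    elif compressMode == "bzip2":
        filename = "%s.bz2" % filename
    return filename

# Hypothetical mbox file backed up with gzip compression on 2013-05-09.
name = buildBackupFilename("/home/user/mail", "gzip", datetime.datetime(2013, 5, 9))
```

As the note above says, only the date lands in the filename; the precise datetime lives in the pickled revision file.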
@param mboxPath: Path to the indicated mbox file or directory @param compressMode: Compress mode to use for this mbox path @param newRevision: Revision this backup path represents @return: Tuple of (absolute path to tarfile, tar archive mode) """ normalizedPath = buildNormalizedPath(mboxPath) revisionDate = newRevision.strftime("%Y%m%d") filename = "mbox-%s-%s.tar" % (revisionDate, normalizedPath) if compressMode == 'gzip': filename = "%s.gz" % filename archiveMode = "targz" elif compressMode == 'bzip2': filename = "%s.bz2" % filename archiveMode = "tarbz2" else: archiveMode = "tar" tarfilePath = os.path.join(config.collect.targetDir, filename) logger.debug("Tarfile path is [%s]" % tarfilePath) return (tarfilePath, archiveMode) def _getOutputFile(backupPath, compressMode): """ Opens the output file used for saving backup information. If the compress mode is "gzip", we'll open a C{GzipFile}, and if the compress mode is "bzip2", we'll open a C{BZ2File}. Otherwise, we'll just return an object from the normal C{open()} method. @param backupPath: Path to file to open. @param compressMode: Compress mode of file ("none", "gzip", "bzip"). @return: Output file object. """ if compressMode == "gzip": return GzipFile(backupPath, "w") elif compressMode == "bzip2": return BZ2File(backupPath, "w") else: return open(backupPath, "w") def _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None): """ Backs up an individual mbox file. @param config: Cedar Backup configuration. @param absolutePath: Path to mbox file to back up. @param fullBackup: Indicates whether this should be a full backup. 
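The `_getOutputFile` dispatch described above hands back a `GzipFile`, `BZ2File`, or plain file object depending on compress mode, and everything downstream just writes to it. A minimal sketch verifying that a gzip-selected output file can be read back (the bzip2 branch is omitted; paths and content are hypothetical):

```python
import gzip
import os
import tempfile

def getOutputFile(backupPath, compressMode):
    """Sketch of the _getOutputFile dispatch (bzip2 branch omitted)."""
    if compressMode == "gzip":
        return gzip.GzipFile(backupPath, "w")
    return open(backupPath, "wb")

backupPath = os.path.join(tempfile.mkdtemp(), "mbox-20130509-sample.gz")
out = getOutputFile(backupPath, "gzip")
out.write(b"From sender@example.com ...\n")   # hypothetical mbox content
out.close()

# The caller never needs to know which kind of file object it was writing to;
# the data comes back intact through the matching decompressor.
with gzip.open(backupPath, "rb") as f:
    restored = f.read()
```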
   @param collectMode: Indicates the collect mode for this item
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
   @param lastRevision: Date of last backup as datetime.datetime
   @param newRevision: Date of new (current) backup as datetime.datetime
   @param targetDir: Target directory to write the backed-up file into

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem backing up the mbox file.
   """
   backupPath = _getBackupPath(config, absolutePath, compressMode, newRevision, targetDir=targetDir)
   outputFile = _getOutputFile(backupPath, compressMode)
   if fullBackup or collectMode != "incr" or lastRevision is None:
      args = [ "-a", "-u", absolutePath, ]  # remove duplicates but fetch entire mailbox
   else:
      revisionDate = lastRevision.strftime("%Y-%m-%dT%H:%M:%S")  # ISO-8601 format; grepmail calls Date::Parse::str2time()
      args = [ "-a", "-u", "-d", "since %s" % revisionDate, absolutePath, ]
   command = resolveCommand(GREPMAIL_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True,
                           doNotLog=True, outputFile=outputFile)[0]
   if result != 0:
      raise IOError("Error [%d] executing grepmail on [%s]." % (result, absolutePath))
   logger.debug("Completed backing up mailbox [%s]." % absolutePath)
   return backupPath

def _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode,
                   lastRevision, newRevision, excludePaths, excludePatterns):
   """
   Backs up a directory containing mbox files.

   @param config: Cedar Backup configuration.
   @param absolutePath: Path to mbox directory to back up.
   @param fullBackup: Indicates whether this should be a full backup.
   @param collectMode: Indicates the collect mode for this item
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
   @param lastRevision: Date of last backup as datetime.datetime
   @param newRevision: Date of new (current) backup as datetime.datetime
   @param excludePaths: List of absolute paths to exclude.
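The argument-selection logic above reduces to a small decision: a full backup, a non-incremental collect mode, or a missing prior revision all force a whole-mailbox fetch, while a true incremental run restricts grepmail to messages since the last revision's timestamp. A sketch, with an illustrative helper name:

```python
# Sketch of the grepmail argument selection above: a full backup (or a
# non-incremental collect mode, or no prior revision) fetches the whole
# mailbox, while an incremental run restricts to messages since the
# last revision's timestamp.  The helper name is illustrative.
import datetime

def grepmail_args(mbox_path, full, collect_mode, last_revision):
    """Return the grepmail argument list for one mbox backup."""
    if full or collect_mode != "incr" or last_revision is None:
        return ["-a", "-u", mbox_path]
    since = last_revision.strftime("%Y-%m-%dT%H:%M:%S")  # ISO-8601
    return ["-a", "-u", "-d", "since %s" % since, mbox_path]
```

The ISO-8601 timestamp is safe here because, as the source comment notes, grepmail hands the date restriction to Date::Parse::str2time().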
   @param excludePatterns: List of patterns to exclude.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem backing up the mbox file.
   """
   tmpdir = tempfile.mkdtemp(dir=config.options.workingDir)  # created before try so the finally clause can rely on it
   tarList = BackupFileList()
   try:
      mboxList = FilesystemList()
      mboxList.excludeDirs = True
      mboxList.excludePaths = excludePaths
      mboxList.excludePatterns = excludePatterns
      mboxList.addDirContents(absolutePath, recursive=False)
      for item in mboxList:
         backupPath = _backupMboxFile(config, item, fullBackup, collectMode,
                                      "none",  # no need to compress inside compressed tar
                                      lastRevision, newRevision, targetDir=tmpdir)
         tarList.addFile(backupPath)
      (tarfilePath, archiveMode) = _getTarfilePath(config, absolutePath, compressMode, newRevision)
      tarList.generateTarfile(tarfilePath, archiveMode, ignore=True, flat=True)
      changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Completed backing up directory [%s]." % absolutePath)
   finally:
      # Best-effort cleanup; failures here must never mask the real error.
      try:
         for item in tarList:
            if os.path.exists(item):
               try:
                  os.remove(item)
               except: pass
      except: pass
      try:
         os.rmdir(tmpdir)
      except: pass

CedarBackup2-2.22.0/CedarBackup2/extend/sysinfo.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2005,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Official Cedar Backup Extensions # Revision : $Id: sysinfo.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Provides an extension to save off important system recovery information. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to save off important system recovery information. This is a simple Cedar Backup extension used to save off important system recovery information. It saves off three types of information: - Currently-installed Debian packages via C{dpkg --get-selections} - Disk partition information via C{fdisk -l} - System-wide mounted filesystem contents, via C{ls -laR} The saved-off information is placed into the collect directory and is compressed using C{bzip2} to save space. This extension relies on the options and collect configurations in the standard Cedar Backup configuration file, but requires no new configuration of its own. No public functions other than the action are exposed since all of this is pretty simple. @note: If the C{dpkg} or C{fdisk} commands cannot be found in their normal locations or executed by the current user, those steps will be skipped and a note will be logged at the INFO level. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from bz2 import BZ2File # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.sysinfo") DPKG_PATH = "/usr/bin/dpkg" FDISK_PATH = "/sbin/fdisk" DPKG_COMMAND = [ DPKG_PATH, "--get-selections", ] FDISK_COMMAND = [ FDISK_PATH, "-l", ] LS_COMMAND = [ "ls", "-laR", "/", ] ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the sysinfo backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If the backup process fails for some reason. 
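Each sysinfo dump guards its external command with the same availability check: skip the step (logging at INFO level) unless the binary both exists and is executable by the current user. A sketch of that check, with an illustrative helper name:

```python
# Sketch of the availability check the sysinfo dumps apply before
# running dpkg or fdisk: the step is skipped (with an INFO log) unless
# the binary both exists and is executable by the current user.
# The helper name is illustrative.
import os

def command_available(path):
    """Return True if path exists and is executable."""
    return os.path.exists(path) and os.access(path, os.X_OK)
```

The `ls -laR` dump has no such guard because `ls` is resolved from the PATH and its exit status is ignored anyway.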
""" logger.debug("Executing sysinfo extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") _dumpDebianPackages(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) _dumpPartitionTable(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) _dumpFilesystemContents(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) logger.info("Executed the sysinfo extended action successfully.") def _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True): """ Dumps a list of currently installed Debian packages via C{dpkg}. @param targetDir: Directory to write output file into. @param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ if not os.path.exists(DPKG_PATH): logger.info("Not executing Debian package dump since %s doesn't seem to exist." % DPKG_PATH) elif not os.access(DPKG_PATH, os.X_OK): logger.info("Not executing Debian package dump since %s cannot be executed." % DPKG_PATH) else: (outputFile, filename) = _getOutputFile(targetDir, "dpkg-selections", compress) try: command = resolveCommand(DPKG_COMMAND) result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0] if result != 0: raise IOError("Error [%d] executing Debian package dump." % result) finally: outputFile.close() if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after Debian package dump finished." % filename) changeOwnership(filename, backupUser, backupGroup) def _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True): """ Dumps information about the partition table via C{fdisk}. @param targetDir: Directory to write output file into. 
@param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ if not os.path.exists(FDISK_PATH): logger.info("Not executing partition table dump since %s doesn't seem to exist." % FDISK_PATH) elif not os.access(FDISK_PATH, os.X_OK): logger.info("Not executing partition table dump since %s cannot be executed." % FDISK_PATH) else: (outputFile, filename) = _getOutputFile(targetDir, "fdisk-l", compress) try: command = resolveCommand(FDISK_COMMAND) result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, outputFile=outputFile)[0] if result != 0: raise IOError("Error [%d] executing partition table dump." % result) finally: outputFile.close() if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after partition table dump finished." % filename) changeOwnership(filename, backupUser, backupGroup) def _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True): """ Dumps complete listing of filesystem contents via C{ls -laR}. @param targetDir: Directory to write output file into. @param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ (outputFile, filename) = _getOutputFile(targetDir, "ls-laR", compress) try: # Note: can't count on return status from 'ls', so we don't check it. command = resolveCommand(LS_COMMAND) executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile) finally: outputFile.close() if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after filesystem contents dump finished." 
% filename) changeOwnership(filename, backupUser, backupGroup) def _getOutputFile(targetDir, name, compress=True): """ Opens the output file used for saving a dump to the filesystem. The filename will be C{name.txt} (or C{name.txt.bz2} if C{compress} is C{True}), written in the target directory. @param targetDir: Target directory to write file in. @param name: Name of the file to create. @param compress: Indicates whether to write compressed output. @return: Tuple of (Output file object, filename) """ filename = os.path.join(targetDir, "%s.txt" % name) if compress: filename = "%s.bz2" % filename logger.debug("Dump file will be [%s]." % filename) if compress: outputFile = BZ2File(filename, "w") else: outputFile = open(filename, "w") return (outputFile, filename) CedarBackup2-2.22.0/CedarBackup2/extend/subversion.py0000664000175000017500000016251611415165677024063 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005,2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici
# Language : Python (>= 2.5)
# Project  : Official Cedar Backup Extensions
# Revision : $Id: subversion.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Provides an extension to back up Subversion repositories.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides an extension to back up Subversion repositories.

This is a Cedar Backup extension used to back up Subversion repositories via
the Cedar Backup command line.  Each Subversion repository can be backed up
using the same collect modes allowed for filesystems in the standard Cedar
Backup collect action: weekly, daily, incremental.

This extension requires a new configuration section and is intended to be run
either immediately before or immediately after the standard collect action.
Aside from its own configuration, it requires the options and collect
configuration sections in the standard Cedar Backup configuration file.

There are two different kinds of Subversion repositories at this writing: BDB
(Berkeley Database) and FSFS (a "filesystem within a filesystem").  Although
the repository type can be specified in configuration, that information is
just kept around for reference.  It doesn't affect the backup.  Both kinds of
repositories are backed up in the same way, using C{svnadmin dump} in an
incremental mode.

It turns out that FSFS repositories can also be backed up just like any other
filesystem directory.  If you would rather do that, then use the normal
collect action.  This is probably simpler, although it carries its own
advantages and disadvantages (plus you will have to be careful to exclude the
working directories Subversion uses when building an update to commit).
Check the Subversion documentation for more information.

@author: Kenneth J.
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import pickle from bz2 import BZ2File from gzip import GzipFile # Cedar Backup modules from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES from CedarBackup2.filesystem import FilesystemList from CedarBackup2.util import UnorderedList, RegexList from CedarBackup2.util import isStartOfWeek, buildNormalizedPath from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import ObjectTypeList, encodePath, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.subversion") SVNLOOK_COMMAND = [ "svnlook", ] SVNADMIN_COMMAND = [ "svnadmin", ] REVISION_PATH_EXTENSION = "svnlast" ######################################################################## # RepositoryDir class definition ######################################################################## class RepositoryDir(object): """ Class representing Subversion repository directory. A repository directory is a directory that contains one or more Subversion repositories. The following restrictions exist on data in this class: - The directory path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. The repository type value is kept around just for reference. It doesn't affect the behavior of the backup. Relative exclusions are allowed here. 
However, there is no configured ignore file, because repository dir backups are not recursive. @sort: __init__, __repr__, __str__, __cmp__, directoryPath, collectMode, compressMode """ def __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None): """ Constructor for the C{RepositoryDir} class. @param repositoryType: Type of repository, for reference @param directoryPath: Absolute path of the Subversion parent directory @param collectMode: Overridden collect mode for this directory. @param compressMode: Overridden compression mode for this directory. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude """ self._repositoryType = None self._directoryPath = None self._collectMode = None self._compressMode = None self._relativeExcludePaths = None self._excludePatterns = None self.repositoryType = repositoryType self.directoryPath = directoryPath self.collectMode = collectMode self.compressMode = compressMode self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. """ return "RepositoryDir(%s, %s, %s, %s, %s, %s)" % (self.repositoryType, self.directoryPath, self.collectMode, self.compressMode, self.relativeExcludePaths, self.excludePatterns) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.repositoryType != other.repositoryType: if self.repositoryType < other.repositoryType: return -1 else: return 1 if self.directoryPath != other.directoryPath: if self.directoryPath < other.directoryPath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setRepositoryType(self, value): """ Property target used to set the repository type. There is no validation; this value is kept around just for reference. """ self._repositoryType = value def _getRepositoryType(self): """ Property target used to get the repository type. """ return self._repositoryType def _setDirectoryPath(self, value): """ Property target used to set the directory path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Repository path must be an absolute path.") self._directoryPath = encodePath(value) def _getDirectoryPath(self): """ Property target used to get the repository path. """ return self._directoryPath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." 
% VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception, e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. """ return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. 
   """
      return self._excludePatterns

   repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.")
   directoryPath = property(_getDirectoryPath, _setDirectoryPath, None, doc="Absolute path of the Subversion parent directory.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.")
   compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.")
   relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.")
   excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")


########################################################################
# Repository class definition
########################################################################

class Repository(object):
   """
   Class representing generic Subversion repository configuration.

   The following restrictions exist on data in this class:

      - The repository path must be absolute.
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.

   The repository type value is kept around just for reference.  It doesn't
   affect the behavior of the backup.

   @sort: __init__, __repr__, __str__, __cmp__, repositoryPath, collectMode, compressMode
   """

   def __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None):
      """
      Constructor for the C{Repository} class.

      @param repositoryType: Type of repository, for reference
      @param repositoryPath: Absolute path to a Subversion repository on disk.
      @param collectMode: Overridden collect mode for this directory.
      @param compressMode: Overridden compression mode for this directory.
""" self._repositoryType = None self._repositoryPath = None self._collectMode = None self._compressMode = None self.repositoryType = repositoryType self.repositoryPath = repositoryPath self.collectMode = collectMode self.compressMode = compressMode def __repr__(self): """ Official string representation for class instance. """ return "Repository(%s, %s, %s, %s)" % (self.repositoryType, self.repositoryPath, self.collectMode, self.compressMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.repositoryType != other.repositoryType: if self.repositoryType < other.repositoryType: return -1 else: return 1 if self.repositoryPath != other.repositoryPath: if self.repositoryPath < other.repositoryPath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 return 0 def _setRepositoryType(self, value): """ Property target used to set the repository type. There is no validation; this value is kept around just for reference. """ self._repositoryType = value def _getRepositoryType(self): """ Property target used to get the repository type. """ return self._repositoryType def _setRepositoryPath(self, value): """ Property target used to set the repository path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. 
""" if value is not None: if not os.path.isabs(value): raise ValueError("Repository path must be an absolute path.") self._repositoryPath = encodePath(value) def _getRepositoryPath(self): """ Property target used to get the repository path. """ return self._repositoryPath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") repositoryPath = property(_getRepositoryPath, _setRepositoryPath, None, doc="Path to the repository to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.") ######################################################################## # SubversionConfig class definition ######################################################################## class SubversionConfig(object): """ Class representing Subversion configuration. 
Subversion configuration is used for backing up Subversion repositories. The following restrictions exist on data in this class: - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The repositories list must be a list of C{Repository} objects. - The repositoryDirs list must be a list of C{RepositoryDir} objects. For the two lists, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element has the correct type. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, repositories """ def __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None): """ Constructor for the C{SubversionConfig} class. @param collectMode: Default collect mode. @param compressMode: Default compress mode. @param repositories: List of Subversion repositories to back up. @param repositoryDirs: List of Subversion parent directories to back up. @raise ValueError: If one of the values is invalid. """ self._collectMode = None self._compressMode = None self._repositories = None self._repositoryDirs = None self.collectMode = collectMode self.compressMode = compressMode self.repositories = repositories self.repositoryDirs = repositoryDirs def __repr__(self): """ Official string representation for class instance. """ return "SubversionConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.repositories, self.repositoryDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.repositories != other.repositories: if self.repositories < other.repositories: return -1 else: return 1 if self.repositoryDirs != other.repositoryDirs: if self.repositoryDirs < other.repositoryDirs: return -1 else: return 1 return 0 def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRepositories(self, value): """ Property target used to set the repositories list. Either the value must be C{None} or each element must be a C{Repository}. @raise ValueError: If the value is not a C{Repository} """ if value is None: self._repositories = None else: try: saved = self._repositories self._repositories = ObjectTypeList(Repository, "Repository") self._repositories.extend(value) except Exception, e: self._repositories = saved raise e def _getRepositories(self): """ Property target used to get the repositories list. 
   """
      return self._repositories

   def _setRepositoryDirs(self, value):
      """
      Property target used to set the repositoryDirs list.
      Either the value must be C{None} or each element must be a C{RepositoryDir}.
      @raise ValueError: If the value is not a C{RepositoryDir}
      """
      if value is None:
         self._repositoryDirs = None
      else:
         try:
            saved = self._repositoryDirs
            self._repositoryDirs = ObjectTypeList(RepositoryDir, "RepositoryDir")
            self._repositoryDirs.extend(value)
         except Exception, e:
            self._repositoryDirs = saved
            raise e

   def _getRepositoryDirs(self):
      """
      Property target used to get the repositoryDirs list.
      """
      return self._repositoryDirs

   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.")
   compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.")
   repositories = property(_getRepositories, _setRepositories, None, doc="List of Subversion repositories to back up.")
   repositoryDirs = property(_getRepositoryDirs, _setRepositoryDirs, None, doc="List of Subversion parent directories to back up.")


########################################################################
# LocalConfig class definition
########################################################################

class LocalConfig(object):
   """
   Class representing this extension's configuration document.

   This is not a general-purpose configuration object like the main Cedar
   Backup configuration object.  Instead, it just knows how to parse and emit
   Subversion-specific configuration values.  Third parties who need to read
   and write configuration related to this extension should access it through
   the constructor, C{validate} and C{addConfig} methods.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, subversion, validate, addConfig
   """

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.
If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._subversion = None self.subversion = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.subversion) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. 
Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.subversion != other.subversion: if self.subversion < other.subversion: return -1 else: return 1 return 0 def _setSubversion(self, value): """ Property target used to set the subversion configuration value. If not C{None}, the value must be a C{SubversionConfig} object. @raise ValueError: If the value is not a C{SubversionConfig} """ if value is None: self._subversion = None else: if not isinstance(value, SubversionConfig): raise ValueError("Value must be a C{SubversionConfig} object.") self._subversion = value def _getSubversion(self): """ Property target used to get the subversion configuration value. """ return self._subversion subversion = property(_getSubversion, _setSubversion, None, "Subversion configuration in terms of a C{SubversionConfig} object.") def validate(self): """ Validates configuration represented by the object. Subversion configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of repositories must contain at least one entry. Each repository must contain a repository path, and then must be either able to take collect mode and compress mode configuration from the parent C{SubversionConfig} object, or must set each value on its own. @raise ValueError: If one of the validations fails. 
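      As an illustrative aside (not part of the original module), the parent/child
      fallback rule described above can be sketched as a small standalone helper;
      the name C{resolveMode} is hypothetical:

      ```python
      def resolveMode(parentMode, repositoryMode):
         # Per the validation rule: a repository may inherit its collect or
         # compress mode from the parent subversion section, but at least one
         # of the two levels must set it, else configuration is invalid.
         if parentMode is None and repositoryMode is None:
            raise ValueError("Mode must be set in parent section or repository.")
         return repositoryMode if repositoryMode is not None else parentMode
      ```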
""" if self.subversion is None: raise ValueError("Subversion section is required.") if ((self.subversion.repositories is None or len(self.subversion.repositories) < 1) and (self.subversion.repositoryDirs is None or len(self.subversion.repositoryDirs) <1)): raise ValueError("At least one Subversion repository must be configured.") if self.subversion.repositories is not None: for repository in self.subversion.repositories: if repository.repositoryPath is None: raise ValueError("Each repository must set a repository path.") if self.subversion.collectMode is None and repository.collectMode is None: raise ValueError("Collect mode must either be set in parent section or individual repository.") if self.subversion.compressMode is None and repository.compressMode is None: raise ValueError("Compress mode must either be set in parent section or individual repository.") if self.subversion.repositoryDirs is not None: for repositoryDir in self.subversion.repositoryDirs: if repositoryDir.directoryPath is None: raise ValueError("Each repository directory must set a directory path.") if self.subversion.collectMode is None and repositoryDir.collectMode is None: raise ValueError("Collect mode must either be set in parent section or repository directory.") if self.subversion.compressMode is None and repositoryDir.compressMode is None: raise ValueError("Compress mode must either be set in parent section or repository directory.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: collectMode //cb_config/subversion/collectMode compressMode //cb_config/subversion/compressMode We also add groups of the following items, one list element per item:: repository //cb_config/subversion/repository repository_dir //cb_config/subversion/repository_dir @param xmlDom: DOM tree as from C{impl.createDocument()}. 
@param parentNode: Parent that the section should be appended to. """ if self.subversion is not None: sectionNode = addContainerNode(xmlDom, parentNode, "subversion") addStringNode(xmlDom, sectionNode, "collect_mode", self.subversion.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", self.subversion.compressMode) if self.subversion.repositories is not None: for repository in self.subversion.repositories: LocalConfig._addRepository(xmlDom, sectionNode, repository) if self.subversion.repositoryDirs is not None: for repositoryDir in self.subversion.repositoryDirs: LocalConfig._addRepositoryDir(xmlDom, sectionNode, repositoryDir) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the subversion configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._subversion = LocalConfig._parseSubversion(parentNode) @staticmethod def _parseSubversion(parent): """ Parses a subversion configuration section. We read the following individual fields:: collectMode //cb_config/subversion/collect_mode compressMode //cb_config/subversion/compress_mode We also read groups of the following item, one list element per item:: repositories //cb_config/subversion/repository repository_dirs //cb_config/subversion/repository_dir The repositories are parsed by L{_parseRepositories}, and the repository dirs are parsed by L{_parseRepositoryDirs}. @param parent: Parent node to search beneath. @return: C{SubversionConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" subversion = None section = readFirstChild(parent, "subversion") if section is not None: subversion = SubversionConfig() subversion.collectMode = readString(section, "collect_mode") subversion.compressMode = readString(section, "compress_mode") subversion.repositories = LocalConfig._parseRepositories(section) subversion.repositoryDirs = LocalConfig._parseRepositoryDirs(section) return subversion @staticmethod def _parseRepositories(parent): """ Reads a list of C{Repository} objects from immediately beneath the parent. We read the following individual fields:: repositoryType type repositoryPath abs_path collectMode collect_mode compressMode compess_mode The type field is optional, and its value is kept around only for reference. @param parent: Parent node to search beneath. @return: List of C{Repository} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "repository"): if isElement(entry): repository = Repository() repository.repositoryType = readString(entry, "type") repository.repositoryPath = readString(entry, "abs_path") repository.collectMode = readString(entry, "collect_mode") repository.compressMode = readString(entry, "compress_mode") lst.append(repository) if lst == []: lst = None return lst @staticmethod def _addRepository(xmlDom, parentNode, repository): """ Adds a repository container as the next child of a parent. We add the following fields to the document:: repositoryType repository/type repositoryPath repository/abs_path collectMode repository/collect_mode compressMode repository/compress_mode The node itself is created as the next child of the parent node. This method only adds one repository node. The parent must loop for each repository in the C{SubversionConfig} object. If C{repository} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. 
      @param repository: Repository to be added to the document.
      """
      if repository is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "repository")
         addStringNode(xmlDom, sectionNode, "type", repository.repositoryType)
         addStringNode(xmlDom, sectionNode, "abs_path", repository.repositoryPath)
         addStringNode(xmlDom, sectionNode, "collect_mode", repository.collectMode)
         addStringNode(xmlDom, sectionNode, "compress_mode", repository.compressMode)

   @staticmethod
   def _parseRepositoryDirs(parent):
      """
      Reads a list of C{RepositoryDir} objects from immediately beneath the parent.

      We read the following individual fields::

         repositoryType type
         directoryPath  abs_path
         collectMode    collect_mode
         compressMode   compress_mode

      We also read groups of the following items, one list element per item::

         relativeExcludePaths exclude/rel_path
         excludePatterns      exclude/pattern

      The exclusions are parsed by L{_parseExclusions}.

      The type field is optional, and its value is kept around only for reference.

      @param parent: Parent node to search beneath.

      @return: List of C{RepositoryDir} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parent, "repository_dir"):
         if isElement(entry):
            repositoryDir = RepositoryDir()
            repositoryDir.repositoryType = readString(entry, "type")
            repositoryDir.directoryPath = readString(entry, "abs_path")
            repositoryDir.collectMode = readString(entry, "collect_mode")
            repositoryDir.compressMode = readString(entry, "compress_mode")
            (repositoryDir.relativeExcludePaths, repositoryDir.excludePatterns) = LocalConfig._parseExclusions(entry)
            lst.append(repositoryDir)
      if lst == []:
         lst = None
      return lst

   @staticmethod
   def _parseExclusions(parentNode):
      """
      Reads exclusions data from immediately beneath the parent.

      We read groups of the following items, one list element per item::

         relative exclude/rel_path
         patterns exclude/pattern

      If there are none of some pattern (i.e.
      no relative path items) then C{None} will be returned for that item in the tuple.

      @param parentNode: Parent node to search beneath.

      @return: Tuple of (relative, patterns) exclusions.
      """
      section = readFirstChild(parentNode, "exclude")
      if section is None:
         return (None, None)
      else:
         relative = readStringList(section, "rel_path")
         patterns = readStringList(section, "pattern")
         return (relative, patterns)

   @staticmethod
   def _addRepositoryDir(xmlDom, parentNode, repositoryDir):
      """
      Adds a repository dir container as the next child of a parent.

      We add the following fields to the document::

         repositoryType repository_dir/type
         directoryPath  repository_dir/abs_path
         collectMode    repository_dir/collect_mode
         compressMode   repository_dir/compress_mode

      We also add groups of the following items, one list element per item::

         relativeExcludePaths repository_dir/exclude/rel_path
         excludePatterns      repository_dir/exclude/pattern

      The node itself is created as the next child of the parent node.

      This method only adds one repository dir node.  The parent must loop for
      each repository dir in the C{SubversionConfig} object.

      If C{repositoryDir} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      @param repositoryDir: Repository dir to be added to the document.
""" if repositoryDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "repository_dir") addStringNode(xmlDom, sectionNode, "type", repositoryDir.repositoryType) addStringNode(xmlDom, sectionNode, "abs_path", repositoryDir.directoryPath) addStringNode(xmlDom, sectionNode, "collect_mode", repositoryDir.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", repositoryDir.compressMode) if ((repositoryDir.relativeExcludePaths is not None and repositoryDir.relativeExcludePaths != []) or (repositoryDir.excludePatterns is not None and repositoryDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if repositoryDir.relativeExcludePaths is not None: for relativePath in repositoryDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if repositoryDir.excludePatterns is not None: for pattern in repositoryDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the Subversion backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing Subversion extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) todayIsStart = isStartOfWeek(config.options.startingDay) fullBackup = options.full or todayIsStart logger.debug("Full backup flag is [%s]" % fullBackup) if local.subversion.repositories is not None: for repository in local.subversion.repositories: _backupRepository(config, local, todayIsStart, fullBackup, repository) if local.subversion.repositoryDirs is not None: for repositoryDir in local.subversion.repositoryDirs: logger.debug("Working with repository directory [%s]." % repositoryDir.directoryPath) for repositoryPath in _getRepositoryPaths(repositoryDir): repository = Repository(repositoryDir.repositoryType, repositoryPath, repositoryDir.collectMode, repositoryDir.compressMode) _backupRepository(config, local, todayIsStart, fullBackup, repository) logger.info("Completed backing up Subversion repository directory [%s]." % repositoryDir.directoryPath) logger.info("Executed the Subversion extended action successfully.") def _getCollectMode(local, repository): """ Gets the collect mode that should be used for a repository. Use repository's if possible, otherwise take from subversion section. @param repository: Repository object. @return: Collect mode to use. """ if repository.collectMode is None: collectMode = local.subversion.collectMode else: collectMode = repository.collectMode logger.debug("Collect mode is [%s]" % collectMode) return collectMode def _getCompressMode(local, repository): """ Gets the compress mode that should be used for a repository. Use repository's if possible, otherwise take from subversion section. @param local: LocalConfig object. @param repository: Repository object. @return: Compress mode to use. 
""" if repository.compressMode is None: compressMode = local.subversion.compressMode else: compressMode = repository.compressMode logger.debug("Compress mode is [%s]" % compressMode) return compressMode def _getRevisionPath(config, repository): """ Gets the path to the revision file associated with a repository. @param config: Config object. @param repository: Repository object. @return: Absolute path to the revision file associated with the repository. """ normalized = buildNormalizedPath(repository.repositoryPath) filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) revisionPath = os.path.join(config.options.workingDir, filename) logger.debug("Revision file path is [%s]" % revisionPath) return revisionPath def _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision): """ Gets the backup file path (including correct extension) associated with a repository. @param config: Config object. @param repositoryPath: Path to the indicated repository @param compressMode: Compress mode to use for this repository. @param startRevision: Starting repository revision. @param endRevision: Ending repository revision. @return: Absolute path to the backup file associated with the repository. """ normalizedPath = buildNormalizedPath(repositoryPath) filename = "svndump-%d:%d-%s.txt" % (startRevision, endRevision, normalizedPath) if compressMode == 'gzip': filename = "%s.gz" % filename elif compressMode == 'bzip2': filename = "%s.bz2" % filename backupPath = os.path.join(config.collect.targetDir, filename) logger.debug("Backup file path is [%s]" % backupPath) return backupPath def _getRepositoryPaths(repositoryDir): """ Gets a list of child repository paths within a repository directory. 
   @param repositoryDir: RepositoryDir object.
   """
   (excludePaths, excludePatterns) = _getExclusions(repositoryDir)
   fsList = FilesystemList()
   fsList.excludeFiles = True
   fsList.excludeLinks = True
   fsList.excludePaths = excludePaths
   fsList.excludePatterns = excludePatterns
   fsList.addDirContents(path=repositoryDir.directoryPath, recursive=False, addSelf=False)
   return fsList

def _getExclusions(repositoryDir):
   """
   Gets exclusions (files and patterns) associated with a repository directory.

   The returned files value is a list of absolute paths to be excluded from the
   backup for a given directory.  It is derived from the repository directory's
   relative exclude paths.

   The returned patterns value is a list of patterns to be excluded from the
   backup for a given directory.  It is derived from the repository directory's
   list of patterns.

   @param repositoryDir: Repository directory object.

   @return: Tuple (files, patterns) indicating what to exclude.
   """
   paths = []
   if repositoryDir.relativeExcludePaths is not None:
      for relativePath in repositoryDir.relativeExcludePaths:
         paths.append(os.path.join(repositoryDir.directoryPath, relativePath))
   patterns = []
   if repositoryDir.excludePatterns is not None:
      patterns.extend(repositoryDir.excludePatterns)
   logger.debug("Exclude paths: %s" % paths)
   logger.debug("Exclude patterns: %s" % patterns)
   return (paths, patterns)

def _backupRepository(config, local, todayIsStart, fullBackup, repository):
   """
   Backs up an individual Subversion repository.

   This internal method wraps the public methods and adds some functionality
   to work better with the extended action itself.

   @param config: Cedar Backup configuration.
   @param local: Local configuration
   @param todayIsStart: Indicates whether today is start of week
   @param fullBackup: Full backup flag
   @param repository: Repository to operate on

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the Subversion dump.
""" logger.debug("Working with repository [%s]" % repository.repositoryPath) logger.debug("Repository type is [%s]" % repository.repositoryType) collectMode = _getCollectMode(local, repository) compressMode = _getCompressMode(local, repository) revisionPath = _getRevisionPath(config, repository) if not (fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart)): logger.debug("Repository will not be backed up, per collect mode.") return logger.debug("Repository meets criteria to be backed up today.") if collectMode != "incr" or fullBackup: startRevision = 0 endRevision = getYoungestRevision(repository.repositoryPath) logger.debug("Using full backup, revision: (%d, %d)." % (startRevision, endRevision)) else: if fullBackup: startRevision = 0 endRevision = getYoungestRevision(repository.repositoryPath) else: startRevision = _loadLastRevision(revisionPath) + 1 endRevision = getYoungestRevision(repository.repositoryPath) if startRevision > endRevision: logger.info("No need to back up repository [%s]; no new revisions." % repository.repositoryPath) return logger.debug("Using incremental backup, revision: (%d, %d)." % (startRevision, endRevision)) backupPath = _getBackupPath(config, repository.repositoryPath, compressMode, startRevision, endRevision) outputFile = _getOutputFile(backupPath, compressMode) try: backupRepository(repository.repositoryPath, outputFile, startRevision, endRevision) finally: outputFile.close() if not os.path.exists(backupPath): raise IOError("Dump file [%s] does not seem to exist after backup completed." % backupPath) changeOwnership(backupPath, config.options.backupUser, config.options.backupGroup) if collectMode == "incr": _writeLastRevision(config, revisionPath, endRevision) logger.info("Completed backing up Subversion repository [%s]." % repository.repositoryPath) def _getOutputFile(backupPath, compressMode): """ Opens the output file used for saving the Subversion dump. 
If the compress mode is "gzip", we'll open a C{GzipFile}, and if the compress mode is "bzip2", we'll open a C{BZ2File}. Otherwise, we'll just return an object from the normal C{open()} method. @param backupPath: Path to file to open. @param compressMode: Compress mode of file ("none", "gzip", "bzip"). @return: Output file object. """ if compressMode == "gzip": return GzipFile(backupPath, "w") elif compressMode == "bzip2": return BZ2File(backupPath, "w") else: return open(backupPath, "w") def _loadLastRevision(revisionPath): """ Loads the indicated revision file from disk into an integer. If we can't load the revision file successfully (either because it doesn't exist or for some other reason), then a revision of -1 will be returned - but the condition will be logged. This way, we err on the side of backing up too much, because anyone using this will presumably be adding 1 to the revision, so they don't duplicate any backups. @param revisionPath: Path to the revision file on disk. @return: Integer representing last backed-up revision, -1 on error or if none can be read. """ if not os.path.isfile(revisionPath): startRevision = -1 logger.debug("Revision file [%s] does not exist on disk." % revisionPath) else: try: startRevision = pickle.load(open(revisionPath, "r")) logger.debug("Loaded revision file [%s] from disk: %d." % (revisionPath, startRevision)) except: startRevision = -1 logger.error("Failed loading revision file [%s] from disk." % revisionPath) return startRevision def _writeLastRevision(config, revisionPath, endRevision): """ Writes the end revision to the indicated revision file on disk. If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception. @param config: Config object. @param revisionPath: Path to the revision file on disk. @param endRevision: Last revision backed up on this run. 
""" try: pickle.dump(endRevision, open(revisionPath, "w")) changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new revision file [%s] to disk: %d." % (revisionPath, endRevision)) except: logger.error("Failed to write revision file [%s] to disk." % revisionPath) ############################## # backupRepository() function ############################## def backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion repository. The starting and ending revision values control an incremental backup. If the starting revision is not passed in, then revision zero (the start of the repository) is assumed. If the ending revision is not passed in, then the youngest revision in the database will be used as the endpoint. The backup data will be written into the passed-in back file. Normally, this would be an object as returned from C{open}, but it is possible to use something like a C{GzipFile} to write compressed output. The caller is responsible for closing the passed-in backup file. @note: This function should either be run as root or as the owner of the Subversion repository. @note: It is apparently I{not} a good idea to interrupt this function. Sometimes, this leaves the repository in a "wedged" state, which requires recovery using C{svnadmin recover}. @param repositoryPath: Path to Subversion repository to back up @type repositoryPath: String path representing Subversion repository on disk. @param backupFile: Python file object to use for writing backup. @type backupFile: Python file object as from C{open()} or C{file()}. @param startRevision: Starting repository revision to back up (for incremental backups) @type startRevision: Integer value >= 0. @param endRevision: Ending repository revision to back up (for incremental backups) @type endRevision: Integer value >= 0. @raise ValueError: If some value is missing or invalid. 
   @raise IOError: If there is a problem executing the Subversion dump.
   """
   if startRevision is None:
      startRevision = 0
   if endRevision is None:
      endRevision = getYoungestRevision(repositoryPath)
   if int(startRevision) < 0:
      raise ValueError("Start revision must be >= 0.")
   if int(endRevision) < 0:
      raise ValueError("End revision must be >= 0.")
   if int(startRevision) > int(endRevision):
      raise ValueError("Start revision must be <= end revision.")
   args = [ "dump", "--quiet", "-r%s:%s" % (startRevision, endRevision), "--incremental", repositoryPath, ]
   command = resolveCommand(SVNADMIN_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True,
                           doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      raise IOError("Error [%d] executing Subversion dump for repository [%s]." % (result, repositoryPath))
   logger.debug("Completed dumping subversion repository [%s]." % repositoryPath)


#################################
# getYoungestRevision() function
#################################

def getYoungestRevision(repositoryPath):
   """
   Gets the youngest (newest) revision in a Subversion repository using C{svnlook}.

   @note: This function should either be run as root or as the owner of the
   Subversion repository.

   @param repositoryPath: Path to Subversion repository to look in.
   @type repositoryPath: String path representing Subversion repository on disk.

   @return: Youngest revision as an integer.

   @raise ValueError: If there is a problem parsing the C{svnlook} output.
   @raise IOError: If there is a problem executing the C{svnlook} command.
   """
   args = [ 'youngest', repositoryPath, ]
   command = resolveCommand(SVNLOOK_COMMAND)
   (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
   if result != 0:
      raise IOError("Error [%d] executing 'svnlook youngest' for repository [%s]."
% (result, repositoryPath)) if len(output) != 1: raise ValueError("Unable to parse 'svnlook youngest' output.") return int(output[0]) ######################################################################## # Deprecated functionality ######################################################################## class BDBRepository(Repository): """ Class representing Subversion BDB (Berkeley Database) repository configuration. This object is deprecated. Use a simple L{Repository} instead. """ def __init__(self, repositoryPath=None, collectMode=None, compressMode=None): """ Constructor for the C{BDBRepository} class. """ super(BDBRepository, self).__init__("BDB", repositoryPath, collectMode, compressMode) def __repr__(self): """ Official string representation for class instance. """ return "BDBRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode) class FSFSRepository(Repository): """ Class representing Subversion FSFS repository configuration. This object is deprecated. Use a simple L{Repository} instead. """ def __init__(self, repositoryPath=None, collectMode=None, compressMode=None): """ Constructor for the C{FSFSRepository} class. """ super(FSFSRepository, self).__init__("FSFS", repositoryPath, collectMode, compressMode) def __repr__(self): """ Official string representation for class instance. """ return "FSFSRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode) def backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion BDB repository. This function is deprecated. Use L{backupRepository} instead. """ return backupRepository(repositoryPath, backupFile, startRevision, endRevision) def backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion FSFS repository. This function is deprecated. Use L{backupRepository} instead. 
""" return backupRepository(repositoryPath, backupFile, startRevision, endRevision) CedarBackup2-2.22.0/CedarBackup2/extend/encrypt.py0000664000175000017500000004665611415165677023356 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Official Cedar Backup Extensions # Revision : $Id: encrypt.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Provides an extension to encrypt staging directories. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to encrypt staging directories. When this extension is executed, all backed-up files in the configured Cedar Backup staging directory will be encrypted using gpg. Any directory which has already been encrypted (as indicated by the C{cback.encrypt} file) will be ignored. 
This extension requires a new configuration section and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup2.xmlutil import readFirstChild, readString from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.encrypt") GPG_COMMAND = [ "gpg", ] VALID_ENCRYPT_MODES = [ "gpg", ] ENCRYPT_INDICATOR = "cback.encrypt" ######################################################################## # EncryptConfig class definition ######################################################################## class EncryptConfig(object): """ Class representing encrypt configuration. Encrypt configuration is used for encrypting staging directories. The following restrictions exist on data in this class: - The encrypt mode must be one of the values in L{VALID_ENCRYPT_MODES} - The encrypt target value must be a non-empty string @sort: __init__, __repr__, __str__, __cmp__, encryptMode, encryptTarget """ def __init__(self, encryptMode=None, encryptTarget=None): """ Constructor for the C{EncryptConfig} class. 
@param encryptMode: Encryption mode @param encryptTarget: Encryption target (for instance, GPG recipient) @raise ValueError: If one of the values is invalid. """ self._encryptMode = None self._encryptTarget = None self.encryptMode = encryptMode self.encryptTarget = encryptTarget def __repr__(self): """ Official string representation for class instance. """ return "EncryptConfig(%s, %s)" % (self.encryptMode, self.encryptTarget) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.encryptMode != other.encryptMode: if self.encryptMode < other.encryptMode: return -1 else: return 1 if self.encryptTarget != other.encryptTarget: if self.encryptTarget < other.encryptTarget: return -1 else: return 1 return 0 def _setEncryptMode(self, value): """ Property target used to set the encrypt mode. If not C{None}, the mode must be one of the values in L{VALID_ENCRYPT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ENCRYPT_MODES: raise ValueError("Encrypt mode must be one of %s." % VALID_ENCRYPT_MODES) self._encryptMode = value def _getEncryptMode(self): """ Property target used to get the encrypt mode. """ return self._encryptMode def _setEncryptTarget(self, value): """ Property target used to set the encrypt target. """ if value is not None: if len(value) < 1: raise ValueError("Encrypt target must be non-empty string.") self._encryptTarget = value def _getEncryptTarget(self): """ Property target used to get the encrypt target. 
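The property-with-validation pattern used by this class (and by the other configuration classes in this package) can be reduced to a small standalone sketch; the class and names below are illustrative, not part of the package:

```python
VALID_MODES = ["gpg"]

class ModeHolder(object):
    # Same pattern as EncryptConfig: a private field, a setter that
    # validates, a getter, and a property tying them together.
    def __init__(self, mode=None):
        self._mode = None
        self.mode = mode  # routes through the validating setter

    def _setMode(self, value):
        if value is not None and value not in VALID_MODES:
            raise ValueError("Mode must be one of %s." % VALID_MODES)
        self._mode = value

    def _getMode(self):
        return self._mode

    mode = property(_getMode, _setMode, None, "Encryption mode.")

holder = ModeHolder("gpg")
try:
    holder.mode = "des"   # rejected by the setter
    failed = False
except ValueError:
    failed = True
```

Because the setter raises before assigning, a rejected value leaves the previous value in place.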
""" return self._encryptTarget encryptMode = property(_getEncryptMode, _setEncryptMode, None, doc="Encrypt mode.") encryptTarget = property(_getEncryptTarget, _setEncryptTarget, None, doc="Encrypt target (i.e. GPG recipient).") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit encrypt-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, encrypt, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. 
@type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._encrypt = None self.encrypt = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.encrypt) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.encrypt != other.encrypt: if self.encrypt < other.encrypt: return -1 else: return 1 return 0 def _setEncrypt(self, value): """ Property target used to set the encrypt configuration value. If not C{None}, the value must be a C{EncryptConfig} object. @raise ValueError: If the value is not a C{EncryptConfig} """ if value is None: self._encrypt = None else: if not isinstance(value, EncryptConfig): raise ValueError("Value must be a C{EncryptConfig} object.") self._encrypt = value def _getEncrypt(self): """ Property target used to get the encrypt configuration value. 
""" return self._encrypt encrypt = property(_getEncrypt, _setEncrypt, None, "Encrypt configuration in terms of a C{EncryptConfig} object.") def validate(self): """ Validates configuration represented by the object. Encrypt configuration must be filled in. Within that, both the encrypt mode and encrypt target must be filled in. @raise ValueError: If one of the validations fails. """ if self.encrypt is None: raise ValueError("Encrypt section is required.") if self.encrypt.encryptMode is None: raise ValueError("Encrypt mode must be set.") if self.encrypt.encryptTarget is None: raise ValueError("Encrypt target must be set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: encryptMode //cb_config/encrypt/encrypt_mode encryptTarget //cb_config/encrypt/encrypt_target @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.encrypt is not None: sectionNode = addContainerNode(xmlDom, parentNode, "encrypt") addStringNode(xmlDom, sectionNode, "encrypt_mode", self.encrypt.encryptMode) addStringNode(xmlDom, sectionNode, "encrypt_target", self.encrypt.encryptTarget) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the encrypt configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._encrypt = LocalConfig._parseEncrypt(parentNode) @staticmethod def _parseEncrypt(parent): """ Parses an encrypt configuration section.
We read the following individual fields:: encryptMode //cb_config/encrypt/encrypt_mode encryptTarget //cb_config/encrypt/encrypt_target @param parent: Parent node to search beneath. @return: C{EncryptConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ encrypt = None section = readFirstChild(parent, "encrypt") if section is not None: encrypt = EncryptConfig() encrypt.encryptMode = readString(section, "encrypt_mode") encrypt.encryptTarget = readString(section, "encrypt_target") return encrypt ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the encrypt backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing encrypt extended action.") if config.options is None or config.stage is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.encrypt.encryptMode not in ["gpg", ]: raise ValueError("Unknown encrypt mode [%s]" % local.encrypt.encryptMode) if local.encrypt.encryptMode == "gpg": _confirmGpgRecipient(local.encrypt.encryptTarget) dailyDirs = findDailyDirs(config.stage.targetDir, ENCRYPT_INDICATOR) for dailyDir in dailyDirs: _encryptDailyDir(dailyDir, local.encrypt.encryptMode, local.encrypt.encryptTarget, config.options.backupUser, config.options.backupGroup) writeIndicatorFile(dailyDir, ENCRYPT_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the encrypt extended action successfully.") ############################## # _encryptDailyDir() function ############################## def _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup): """ Encrypts the contents of a daily staging directory. Indicator files are ignored. All other files are encrypted. The only valid encrypt mode is C{"gpg"}. @param dailyDir: Daily directory to encrypt @param encryptMode: Encryption mode (only "gpg" is allowed) @param encryptTarget: Encryption target (GPG recipient for "gpg" mode) @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @raise ValueError: If the encrypt mode is not supported. @raise ValueError: If the daily staging directory does not exist. """ logger.debug("Begin encrypting contents of [%s]." 
% dailyDir) fileList = getBackupFiles(dailyDir) # ignores indicator files for path in fileList: _encryptFile(path, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=True) logger.debug("Completed encrypting contents of [%s]." % dailyDir) ########################## # _encryptFile() function ########################## def _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False): """ Encrypts the source file using the indicated mode. The encrypted file will be owned by the indicated backup user and group. If C{removeSource} is C{True}, then the source file will be removed after it is successfully encrypted. Currently, only the C{"gpg"} encrypt mode is supported. @param sourcePath: Absolute path of the source file to encrypt @param encryptMode: Encryption mode (only "gpg" is allowed) @param encryptTarget: Encryption target (GPG recipient) @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @param removeSource: Indicates whether to remove the source file @return: Path to the newly-created encrypted file. @raise ValueError: If an invalid encrypt mode is passed in. @raise IOError: If there is a problem accessing, encrypting or removing the source file. """ if not os.path.exists(sourcePath): raise ValueError("Source path [%s] does not exist." % sourcePath) if encryptMode == 'gpg': encryptedPath = _encryptFileWithGpg(sourcePath, recipient=encryptTarget) else: raise ValueError("Unknown encrypt mode [%s]" % encryptMode) changeOwnership(encryptedPath, backupUser, backupGroup) if removeSource: if os.path.exists(sourcePath): try: os.remove(sourcePath) logger.debug("Completed removing old file [%s]." % sourcePath) except OSError: raise IOError("Failed to remove file [%s] after encrypting it."
% (sourcePath)) return encryptedPath ################################# # _encryptFileWithGpg() function ################################# def _encryptFileWithGpg(sourcePath, recipient): """ Encrypts the indicated source file using GPG. The encrypted file will be in GPG's binary output format and will have the same name as the source file plus a C{".gpg"} extension. The source file will not be modified or removed by this function call. @param sourcePath: Absolute path of file to be encrypted. @param recipient: Recipient name to be passed to GPG's C{"-r"} option. @return: Path to the newly-created encrypted file. @raise IOError: If there is a problem encrypting the file. """ encryptedPath = "%s.gpg" % sourcePath command = resolveCommand(GPG_COMMAND) args = [ "--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath, ] result = executeCommand(command, args)[0] if result != 0: raise IOError("Error [%d] calling gpg to encrypt [%s]." % (result, sourcePath)) if not os.path.exists(encryptedPath): raise IOError("After call to [%s], encrypted file [%s] does not exist." % (command, encryptedPath)) logger.debug("Completed encrypting file [%s] to [%s]." % (sourcePath, encryptedPath)) return encryptedPath ################################# # _confirmGpgRecipient() function ################################# def _confirmGpgRecipient(recipient): """ Confirms that a recipient's public key is known to GPG. Throws an exception if there is a problem, or returns normally otherwise. @param recipient: Recipient name @raise IOError: If the recipient's public key is not known to GPG. """ command = resolveCommand(GPG_COMMAND) args = [ "--batch", "-k", recipient, ] # should use --with-colons if the output will be parsed result = executeCommand(command, args)[0] if result != 0: raise IOError("GPG unable to find public key for [%s]."
% recipient) CedarBackup2-2.22.0/CedarBackup2/extend/postgresql.py0000664000175000017500000005616611645150366024064 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010 Kenneth J. Pronovici. # Copyright (c) 2006 Antoine Beaupre. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Antoine Beaupre # Language : Python (>= 2.5) # Project : Official Cedar Backup Extensions # Revision : $Id: postgresql.py 1022 2011-10-11 23:27:49Z pronovic $ # Purpose : Provides an extension to back up PostgreSQL databases. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # This file was created with a width of 132 characters, and NO tabs. ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up PostgreSQL databases. This is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. 
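At its core, the extension just shells out to the PostgreSQL dump tools. A simplified sketch of how the command line is assembled (mirroring the argument handling in C{backupDatabase()} later in this module; the real code routes the command through C{resolveCommand} and C{executeCommand} from C{CedarBackup2.util}):

```python
def build_dump_command(user=None, database=None):
    # pg_dumpall handles the "all databases" case; pg_dump handles a
    # single named database.  The -U option selects the connect user.
    args = []
    if user is not None:
        args.extend(["-U", user])
    if database is None:
        command = ["pg_dumpall"]
    else:
        command = ["pg_dump"]
        args.append(database)
    return command + args

example = build_dump_command(user="backup", database="mydb")
# example is ["pg_dump", "-U", "backup", "mydb"]
```

The user and database names above are illustrative; in the real extension they come from configuration.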
It requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. The backup is done via the C{pg_dump} or C{pg_dumpall} commands included with the PostgreSQL product. Output can be compressed using C{gzip} or C{bzip2}. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the C{pg_dump} client. This can be accomplished using appropriate voodoo in the C{pg_hba.conf} file. Note that this code always produces a full backup. There is currently no facility for making incremental backups. You should always make C{/etc/cback.conf} unreadable to non-root users once you place PostgreSQL configuration into it, since that configuration will contain information about available PostgreSQL databases and usernames. Use of this extension I{may} expose usernames in the process listing (via C{ps}) when the backup is running if the username is specified in the configuration. @author: Kenneth J.
Pronovici @author: Antoine Beaupre """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from gzip import GzipFile from bz2 import BZ2File # Cedar Backup modules from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean from CedarBackup2.config import VALID_COMPRESS_MODES from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import ObjectTypeList, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.postgresql") POSTGRESQLDUMP_COMMAND = [ "pg_dump", ] POSTGRESQLDUMPALL_COMMAND = [ "pg_dumpall", ] ######################################################################## # PostgresqlConfig class definition ######################################################################## class PostgresqlConfig(object): """ Class representing PostgreSQL configuration. The PostgreSQL configuration information is used for backing up PostgreSQL databases. The following restrictions exist on data in this class: - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The 'all' flag must be 'Y' if no databases are defined. - The 'all' flag must be 'N' if any databases are defined. - Any values in the databases list must be strings. @sort: __init__, __repr__, __str__, __cmp__, user, compressMode, all, databases """ def __init__(self, user=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622 """ Constructor for the C{PostgresqlConfig} class. @param user: User to execute backup as. @param compressMode: Compress mode for backed-up files.
@param all: Indicates whether to back up all databases. @param databases: List of databases to back up. @raise ValueError: If one of the values is invalid. """ self._user = None self._compressMode = None self._all = None self._databases = None self.user = user self.compressMode = compressMode self.all = all self.databases = databases def __repr__(self): """ Official string representation for class instance. """ return "PostgresqlConfig(%s, %s, %s, %s)" % (self.user, self.compressMode, self.all, self.databases) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.user != other.user: if self.user < other.user: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.all != other.all: if self.all < other.all: return -1 else: return 1 if self.databases != other.databases: if self.databases < other.databases: return -1 else: return 1 return 0 def _setUser(self, value): """ Property target used to set the user value. """ if value is not None: if len(value) < 1: raise ValueError("User must be non-empty string.") self._user = value def _getUser(self): """ Property target used to get the user value. """ return self._user def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setAll(self, value): """ Property target used to set the 'all' flag.
No validations, but we normalize the value to C{True} or C{False}. """ if value: self._all = True else: self._all = False def _getAll(self): """ Property target used to get the 'all' flag. """ return self._all def _setDatabases(self, value): """ Property target used to set the databases list. Either the value must be C{None} or each element must be a string. @raise ValueError: If the value is not a string. """ if value is None: self._databases = None else: for database in value: if len(database) < 1: raise ValueError("Each database must be a non-empty string.") try: saved = self._databases self._databases = ObjectTypeList(basestring, "string") self._databases.extend(value) except Exception, e: self._databases = saved raise e def _getDatabases(self): """ Property target used to get the databases list. """ return self._databases user = property(_getUser, _setUser, None, "User to execute backup as.") compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit PostgreSQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, postgresql, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._postgresql = None self.postgresql = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. 
""" return "LocalConfig(%s)" % (self.postgresql) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.postgresql != other.postgresql: if self.postgresql < other.postgresql: return -1 else: return 1 return 0 def _setPostgresql(self, value): """ Property target used to set the postgresql configuration value. If not C{None}, the value must be a C{PostgresqlConfig} object. @raise ValueError: If the value is not a C{PostgresqlConfig} """ if value is None: self._postgresql = None else: if not isinstance(value, PostgresqlConfig): raise ValueError("Value must be a C{PostgresqlConfig} object.") self._postgresql = value def _getPostgresql(self): """ Property target used to get the postgresql configuration value. """ return self._postgresql postgresql = property(_getPostgresql, _setPostgresql, None, "Postgresql configuration in terms of a C{PostgresqlConfig} object.") def validate(self): """ Validates configuration represented by the object. The compress mode must be filled in. Then, if the 'all' flag I{is} set, no databases are allowed, and if the 'all' flag is I{not} set, at least one database is required. @raise ValueError: If one of the validations fails. 
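As an illustration of these rules, a fragment that passes validation when the 'all' flag is not set might look like the following (user and database names are examples only; the field paths match those documented in C{addConfig}):

```xml
<cb_config>
   <postgresql>
      <user>backup</user>
      <compress_mode>bzip2</compress_mode>
      <all>N</all>
      <database>mydb</database>
      <database>otherdb</database>
   </postgresql>
</cb_config>
```

With C{<all>Y</all>}, the same fragment would fail validation unless every C{<database>} element were removed.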
""" if self.postgresql is None: raise ValueError("PostgreSQL section is required.") if self.postgresql.compressMode is None: raise ValueError("Compress mode value is required.") if self.postgresql.all: if self.postgresql.databases is not None and self.postgresql.databases != []: raise ValueError("Databases cannot be specified if 'all' flag is set.") else: if self.postgresql.databases is None or len(self.postgresql.databases) < 1: raise ValueError("At least one PostgreSQL database must be indicated if 'all' flag is not set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: user //cb_config/postgresql/user compressMode //cb_config/postgresql/compress_mode all //cb_config/postgresql/all We also add groups of the following items, one list element per item:: database //cb_config/postgresql/database @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.postgresql is not None: sectionNode = addContainerNode(xmlDom, parentNode, "postgresql") addStringNode(xmlDom, sectionNode, "user", self.postgresql.user) addStringNode(xmlDom, sectionNode, "compress_mode", self.postgresql.compressMode) addBooleanNode(xmlDom, sectionNode, "all", self.postgresql.all) if self.postgresql.databases is not None: for database in self.postgresql.databases: addStringNode(xmlDom, sectionNode, "database", database) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the postgresql configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. 
""" (xmlDom, parentNode) = createInputDom(xmlData) self._postgresql = LocalConfig._parsePostgresql(parentNode) @staticmethod def _parsePostgresql(parent): """ Parses a postgresql configuration section. We read the following fields:: user //cb_config/postgresql/user compressMode //cb_config/postgresql/compress_mode all //cb_config/postgresql/all We also read groups of the following item, one list element per item:: databases //cb_config/postgresql/database @param parent: Parent node to search beneath. @return: C{PostgresqlConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ postgresql = None section = readFirstChild(parent, "postgresql") if section is not None: postgresql = PostgresqlConfig() postgresql.user = readString(section, "user") postgresql.compressMode = readString(section, "compress_mode") postgresql.all = readBoolean(section, "all") postgresql.databases = readStringList(section, "database") return postgresql ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the PostgreSQL backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
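The dump for each database lands in the collect directory under a predictable name. A standalone sketch of the naming scheme used by the C{_getOutputFile()} helper in this module (paths and database names are illustrative):

```python
import os

def dump_filename(target_dir, database=None, compress_mode=None):
    # postgresqldump.txt for an "all databases" dump, otherwise
    # postgresqldump-<database>.txt, plus a suffix for the compress mode.
    if database is None:
        filename = os.path.join(target_dir, "postgresqldump.txt")
    else:
        filename = os.path.join(target_dir, "postgresqldump-%s.txt" % database)
    if compress_mode == "gzip":
        filename = "%s.gz" % filename
    elif compress_mode == "bzip2":
        filename = "%s.bz2" % filename
    return filename

name = dump_filename("collect", database="mydb", compress_mode="gzip")
# name ends with "postgresqldump-mydb.txt.gz"
```

The real helper also opens the file with C{GzipFile}, C{BZ2File}, or C{open()} as appropriate and returns the open file object alongside the name.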
""" logger.debug("Executing PostgreSQL extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.postgresql.all: logger.info("Backing up all databases.") _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, config.options.backupUser, config.options.backupGroup, None) if local.postgresql.databases is not None and local.postgresql.databases != []: logger.debug("Backing up %d individual databases." % len(local.postgresql.databases)) for database in local.postgresql.databases: logger.info("Backing up database [%s]." % database) _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, config.options.backupUser, config.options.backupGroup, database) logger.info("Executed the PostgreSQL extended action successfully.") def _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None): """ Backs up an individual PostgreSQL database, or all databases. This internal method wraps the public method and adds some functionality, like figuring out a filename, etc. @param targetDir: Directory into which backups should be written. @param compressMode: Compress mode to be used for backed-up files. @param user: User to use for connecting to the database. @param backupUser: User to own resulting file. @param backupGroup: Group to own resulting file. @param database: Name of database, or C{None} for all databases. @return: Name of the generated backup file. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the PostgreSQL dump. """ (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) try: backupDatabase(user, outputFile, database) finally: outputFile.close() if not os.path.exists(filename): raise IOError("Dump file [%s] does not seem to exist after backup completed." 
                    % filename)
   changeOwnership(filename, backupUser, backupGroup)


def _getOutputFile(targetDir, database, compressMode):
   """
   Opens the output file used for saving the PostgreSQL dump.

   The filename is either C{"postgresqldump.txt"} or
   C{"postgresqldump-<database>.txt"}.  The C{".gz"} or C{".bz2"} extension is
   added when C{compressMode} is C{"gzip"} or C{"bzip2"}, respectively.

   @param targetDir: Target directory to write file in.
   @param database: Name of the database (if any)
   @param compressMode: Compress mode to be used for backed-up files.

   @return: Tuple of (Output file object, filename)
   """
   if database is None:
      filename = os.path.join(targetDir, "postgresqldump.txt")
   else:
      filename = os.path.join(targetDir, "postgresqldump-%s.txt" % database)
   if compressMode == "gzip":
      filename = "%s.gz" % filename
      outputFile = GzipFile(filename, "w")
   elif compressMode == "bzip2":
      filename = "%s.bz2" % filename
      outputFile = BZ2File(filename, "w")
   else:
      outputFile = open(filename, "w")
   logger.debug("PostgreSQL dump file will be [%s]." % filename)
   return (outputFile, filename)


############################
# backupDatabase() function
############################

def backupDatabase(user, backupFile, database=None):
   """
   Backs up an individual PostgreSQL database, or all databases.

   This function backs up either a named local PostgreSQL database or all
   local PostgreSQL databases, using the passed-in user for connectivity.
   This is I{always} a full backup.  There is no facility for incremental
   backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller is
   responsible for closing the passed-in backup file.

   @note: Typically, you would use the C{root} user to back up all databases.

   @param user: User to use for connecting to the database.
   @type user: String representing PostgreSQL username.

   @param backupFile: File used for writing backup.
@type backupFile: Python file object as from C{open()} or C{file()}. @param database: Name of the database to be backed up. @type database: String representing database name, or C{None} for all databases. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the PostgreSQL dump. """ args = [] if user is not None: args.append('-U') args.append(user) if database is None: command = resolveCommand(POSTGRESQLDUMPALL_COMMAND) else: command = resolveCommand(POSTGRESQLDUMP_COMMAND) args.append(database) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] if result != 0: if database is None: raise IOError("Error [%d] executing PostgreSQL database dump for all databases." % result) else: raise IOError("Error [%d] executing PostgreSQL database dump for database [%s]." % (result, database)) CedarBackup2-2.22.0/CedarBackup2/extend/__init__.py0000664000175000017500000000271411415155732023403 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Official Cedar Backup Extensions # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Official Cedar Backup Extensions This package provides official Cedar Backup extensions. 
These are Cedar Backup actions that are not part of the "standard" set of Cedar Backup actions, but are officially supported along with Cedar Backup. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup2.extend import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ] CedarBackup2-2.22.0/CedarBackup2/extend/mysql.py0000664000175000017500000006327411645150366023024 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Official Cedar Backup Extensions # Revision : $Id: mysql.py 1022 2011-10-11 23:27:49Z pronovic $ # Purpose : Provides an extension to back up MySQL databases. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up MySQL databases. This is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. The backup is done via the C{mysqldump} command included with the MySQL product. Output can be compressed using C{gzip} or C{bzip2}. Administrators can configure the extension either to back up all databases or to back up only specific databases. Note that this code always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I'll update this extension or provide another. The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably the best choice. The extension accepts a username and password in configuration. However, you probably do not want to provide those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to C{mysqldump} via the command-line C{--user} and C{--password} switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. 
Typically, that would be done by putting a stanza like this in C{/root/.my.cnf}:: [mysqldump] user = root password = Regardless of whether you are using C{~/.my.cnf} or C{/etc/cback.conf} to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode C{0600}). @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from gzip import GzipFile from bz2 import BZ2File # Cedar Backup modules from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean from CedarBackup2.config import VALID_COMPRESS_MODES from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import ObjectTypeList, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.mysql") MYSQLDUMP_COMMAND = [ "mysqldump", ] ######################################################################## # MysqlConfig class definition ######################################################################## class MysqlConfig(object): """ Class representing MySQL configuration. The MySQL configuration information is used for backing up MySQL databases. The following restrictions exist on data in this class: - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The 'all' flag must be 'Y' if no databases are defined. - The 'all' flag must be 'N' if any databases are defined. 
- Any values in the databases list must be strings. @sort: __init__, __repr__, __str__, __cmp__, user, password, all, databases """ def __init__(self, user=None, password=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622 """ Constructor for the C{MysqlConfig} class. @param user: User to execute backup as. @param password: Password associated with user. @param compressMode: Compress mode for backed-up files. @param all: Indicates whether to back up all databases. @param databases: List of databases to back up. """ self._user = None self._password = None self._compressMode = None self._all = None self._databases = None self.user = user self.password = password self.compressMode = compressMode self.all = all self.databases = databases def __repr__(self): """ Official string representation for class instance. """ return "MysqlConfig(%s, %s, %s, %s)" % (self.user, self.password, self.all, self.databases) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.user != other.user: if self.user < other.user: return -1 else: return 1 if self.password != other.password: if self.password < other.password: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.all != other.all: if self.all < other.all: return -1 else: return 1 if self.databases != other.databases: if self.databases < other.databases: return -1 else: return 1 return 0 def _setUser(self, value): """ Property target used to set the user value. """ if value is not None: if len(value) < 1: raise ValueError("User must be non-empty string.") self._user = value def _getUser(self): """ Property target used to get the user value. 
""" return self._user def _setPassword(self, value): """ Property target used to set the password value. """ if value is not None: if len(value) < 1: raise ValueError("Password must be non-empty string.") self._password = value def _getPassword(self): """ Property target used to get the password value. """ return self._password def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setAll(self, value): """ Property target used to set the 'all' flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._all = True else: self._all = False def _getAll(self): """ Property target used to get the 'all' flag. """ return self._all def _setDatabases(self, value): """ Property target used to set the databases list. Either the value must be C{None} or each element must be a string. @raise ValueError: If the value is not a string. """ if value is None: self._databases = None else: for database in value: if len(database) < 1: raise ValueError("Each database must be a non-empty string.") try: saved = self._databases self._databases = ObjectTypeList(basestring, "string") self._databases.extend(value) except Exception, e: self._databases = saved raise e def _getDatabases(self): """ Property target used to get the databases list. 
""" return self._databases user = property(_getUser, _setUser, None, "User to execute backup as.") password = property(_getPassword, _setPassword, None, "Password associated with user.") compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit MySQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, mysql, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. 
Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._mysql = None self.mysql = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.mysql) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.mysql != other.mysql: if self.mysql < other.mysql: return -1 else: return 1 return 0 def _setMysql(self, value): """ Property target used to set the mysql configuration value. If not C{None}, the value must be a C{MysqlConfig} object. 
@raise ValueError: If the value is not a C{MysqlConfig} """ if value is None: self._mysql = None else: if not isinstance(value, MysqlConfig): raise ValueError("Value must be a C{MysqlConfig} object.") self._mysql = value def _getMysql(self): """ Property target used to get the mysql configuration value. """ return self._mysql mysql = property(_getMysql, _setMysql, None, "Mysql configuration in terms of a C{MysqlConfig} object.") def validate(self): """ Validates configuration represented by the object. The compress mode must be filled in. Then, if the 'all' flag I{is} set, no databases are allowed, and if the 'all' flag is I{not} set, at least one database is required. @raise ValueError: If one of the validations fails. """ if self.mysql is None: raise ValueError("Mysql section is required.") if self.mysql.compressMode is None: raise ValueError("Compress mode value is required.") if self.mysql.all: if self.mysql.databases is not None and self.mysql.databases != []: raise ValueError("Databases cannot be specified if 'all' flag is set.") else: if self.mysql.databases is None or len(self.mysql.databases) < 1: raise ValueError("At least one MySQL database must be indicated if 'all' flag is not set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: user //cb_config/mysql/user password //cb_config/mysql/password compressMode //cb_config/mysql/compress_mode all //cb_config/mysql/all We also add groups of the following items, one list element per item:: database //cb_config/mysql/database @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. 
""" if self.mysql is not None: sectionNode = addContainerNode(xmlDom, parentNode, "mysql") addStringNode(xmlDom, sectionNode, "user", self.mysql.user) addStringNode(xmlDom, sectionNode, "password", self.mysql.password) addStringNode(xmlDom, sectionNode, "compress_mode", self.mysql.compressMode) addBooleanNode(xmlDom, sectionNode, "all", self.mysql.all) if self.mysql.databases is not None: for database in self.mysql.databases: addStringNode(xmlDom, sectionNode, "database", database) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the mysql configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._mysql = LocalConfig._parseMysql(parentNode) @staticmethod def _parseMysql(parentNode): """ Parses a mysql configuration section. We read the following fields:: user //cb_config/mysql/user password //cb_config/mysql/password compressMode //cb_config/mysql/compress_mode all //cb_config/mysql/all We also read groups of the following item, one list element per item:: databases //cb_config/mysql/database @param parentNode: Parent node to search beneath. @return: C{MysqlConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" mysql = None section = readFirstChild(parentNode, "mysql") if section is not None: mysql = MysqlConfig() mysql.user = readString(section, "user") mysql.password = readString(section, "password") mysql.compressMode = readString(section, "compress_mode") mysql.all = readBoolean(section, "all") mysql.databases = readStringList(section, "database") return mysql ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the MySQL backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. """ logger.debug("Executing MySQL extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.mysql.all: logger.info("Backing up all databases.") _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, config.options.backupUser, config.options.backupGroup, None) else: logger.debug("Backing up %d individual databases." % len(local.mysql.databases)) for database in local.mysql.databases: logger.info("Backing up database [%s]." 
                     % database)
         _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user,
                         local.mysql.password, config.options.backupUser, config.options.backupGroup,
                         database)
   logger.info("Executed the MySQL extended action successfully.")


def _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This internal method wraps the public method and adds some functionality,
   like figuring out a filename, etc.

   @param targetDir: Directory into which backups should be written.
   @param compressMode: Compress mode to be used for backed-up files.
   @param user: User to use for connecting to the database (if any).
   @param password: Password associated with user (if any).
   @param backupUser: User to own resulting file.
   @param backupGroup: Group to own resulting file.
   @param database: Name of database, or C{None} for all databases.

   @return: Name of the generated backup file.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the MySQL dump.
   """
   (outputFile, filename) = _getOutputFile(targetDir, database, compressMode)
   try:
      backupDatabase(user, password, outputFile, database)
   finally:
      outputFile.close()
   if not os.path.exists(filename):
      raise IOError("Dump file [%s] does not seem to exist after backup completed." % filename)
   changeOwnership(filename, backupUser, backupGroup)


def _getOutputFile(targetDir, database, compressMode):
   """
   Opens the output file used for saving the MySQL dump.

   The filename is either C{"mysqldump.txt"} or C{"mysqldump-<database>.txt"}.
   The C{".gz"} or C{".bz2"} extension is added when C{compressMode} is
   C{"gzip"} or C{"bzip2"}, respectively.

   @param targetDir: Target directory to write file in.
   @param database: Name of the database (if any)
   @param compressMode: Compress mode to be used for backed-up files.
   @return: Tuple of (Output file object, filename)
   """
   if database is None:
      filename = os.path.join(targetDir, "mysqldump.txt")
   else:
      filename = os.path.join(targetDir, "mysqldump-%s.txt" % database)
   if compressMode == "gzip":
      filename = "%s.gz" % filename
      outputFile = GzipFile(filename, "w")
   elif compressMode == "bzip2":
      filename = "%s.bz2" % filename
      outputFile = BZ2File(filename, "w")
   else:
      outputFile = open(filename, "w")
   logger.debug("MySQL dump file will be [%s]." % filename)
   return (outputFile, filename)


############################
# backupDatabase() function
############################

def backupDatabase(user, password, backupFile, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This function backs up either a named local MySQL database or all local
   MySQL databases, using the passed-in user and password (if provided) for
   connectivity.  This function I{always} results in a full backup.  There is
   no facility for incremental backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller is
   responsible for closing the passed-in backup file.

   Often, the "root" database user will be used when backing up all databases.
   An alternative is to create a separate MySQL "backup" user and grant that
   user rights to read (but not write) all of the databases that will be
   backed up.

   This function accepts a username and password.  However, you probably do
   not want to pass those values in.  This is because they will be provided to
   C{mysqldump} via the command-line C{--user} and C{--password} switches,
   which will be visible to other users in the process listing.  Instead, you
   should configure the username and password in one of MySQL's configuration
   files.
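The argument-assembly step described here, and the reason the C{--user}/C{--password} switches are discouraged, can be seen in a standalone sketch. The helper name is illustrative; the flags mirror the ones built by the real C{backupDatabase()}:

```python
# Sketch of the argument assembly done by backupDatabase(), separated out so
# it can be inspected: credentials passed this way end up on the visible
# command line, which is exactly why ~/.my.cnf is recommended instead.

def buildMysqldumpArgs(user=None, password=None, database=None):
   args = ["-all", "--flush-logs", "--opt"]
   if user is not None:
      args.append("--user=%s" % user)          # visible in process listing
   if password is not None:
      args.append("--password=%s" % password)  # visible in process listing
   if database is None:
      args.insert(0, "--all-databases")
   else:
      args.insert(0, "--databases")
      args.append(database)
   return args
```

With no database argument the dump covers all databases; with one, the C{--databases} form is used so the dump includes the C{CREATE DATABASE} statement.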
   Typically, this would be done by putting a stanza like this in
   C{/root/.my.cnf}, to provide C{mysqldump} with the root database username
   and its password::

      [mysqldump]
      user     = root
      password =

   If you are executing this function as some system user other than root,
   then the C{.my.cnf} file would be placed in the home directory of that
   user.  In either case, make sure to set restrictive permissions (typically,
   mode C{0600}) on C{.my.cnf} to make sure that other users cannot read the
   file.

   @param user: User to use for connecting to the database (if any)
   @type user: String representing MySQL username, or C{None}

   @param password: Password associated with user (if any)
   @type password: String representing MySQL password, or C{None}

   @param backupFile: File used for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param database: Name of the database to be backed up.
   @type database: String representing database name, or C{None} for all databases.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the MySQL dump.
   """
   args = [ "-all", "--flush-logs", "--opt", ]
   if user is not None:
      logger.warn("Warning: MySQL username will be visible in process listing (consider using ~/.my.cnf).")
      args.append("--user=%s" % user)
   if password is not None:
      logger.warn("Warning: MySQL password will be visible in process listing (consider using ~/.my.cnf).")
      args.append("--password=%s" % password)
   if database is None:
      args.insert(0, "--all-databases")
   else:
      args.insert(0, "--databases")
      args.append(database)
   command = resolveCommand(MYSQLDUMP_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True,
                           doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      if database is None:
         raise IOError("Error [%d] executing MySQL database dump for all databases." % result)
      else:
         raise IOError("Error [%d] executing MySQL database dump for database [%s]."
% (result, database)) CedarBackup2-2.22.0/CedarBackup2/extend/capacity.py0000664000175000017500000004662411415165677023462 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: capacity.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Provides an extension to check remaining media capacity. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to check remaining media capacity. Some users have asked for advance warning that their media is beginning to fill up. This is an extension that checks the current capacity of the media in the writer, and prints a warning if the media is more than X% full, or has fewer than X bytes of capacity remaining. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup2.util import displayBytes from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup2.xmlutil import readFirstChild, readString from CedarBackup2.actions.util import createWriter, checkMediaState ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.capacity") ######################################################################## # Percentage class definition ######################################################################## class PercentageQuantity(object): """ Class representing a percentage quantity. The percentage is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.) Even though the quantity is maintained as a string, the string must be in a valid floating point positive number. Technically, any floating point string format supported by Python is allowble. However, it does not make sense to have a negative percentage in this context. @sort: __init__, __repr__, __str__, __cmp__, quantity """ def __init__(self, quantity=None): """ Constructor for the C{PercentageQuantity} class. @param quantity: Percentage quantity, as a string (i.e. "99.9" or "12") @raise ValueError: If the quantity value is invaid. 
""" self._quantity = None self.quantity = quantity def __repr__(self): """ Official string representation for class instance. """ return "PercentageQuantity(%s)" % (self.quantity) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.quantity != other.quantity: if self.quantity < other.quantity: return -1 else: return 1 return 0 def _setQuantity(self, value): """ Property target used to set the quantity The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is not None: if len(value) < 1: raise ValueError("Percentage must be a non-empty string.") floatValue = float(value) if floatValue < 0.0 or floatValue > 100.0: raise ValueError("Percentage must be a positive value from 0.0 to 100.0") self._quantity = value # keep around string def _getQuantity(self): """ Property target used to get the quantity. """ return self._quantity def _getPercentage(self): """ Property target used to get the quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned. 
""" if self.quantity is not None: return float(self.quantity) return 0.0 quantity = property(_getQuantity, _setQuantity, None, doc="Percentage value, as a string") percentage = property(_getPercentage, None, None, "Percentage value, as a floating point number.") ######################################################################## # CapacityConfig class definition ######################################################################## class CapacityConfig(object): """ Class representing capacity configuration. The following restrictions exist on data in this class: - The maximum percentage utilized must be a PercentageQuantity - The minimum bytes remaining must be a ByteQuantity @sort: __init__, __repr__, __str__, __cmp__, maxPercentage, minBytes """ def __init__(self, maxPercentage=None, minBytes=None): """ Constructor for the C{CapacityConfig} class. @param maxPercentage: Maximum percentage of the media that may be utilized @param minBytes: Minimum number of free bytes that must be available """ self._maxPercentage = None self._minBytes = None self.maxPercentage = maxPercentage self.minBytes = minBytes def __repr__(self): """ Official string representation for class instance. """ return "CapacityConfig(%s, %s)" % (self.maxPercentage, self.minBytes) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.maxPercentage != other.maxPercentage: if self.maxPercentage < other.maxPercentage: return -1 else: return 1 if self.minBytes != other.minBytes: if self.minBytes < other.minBytes: return -1 else: return 1 return 0 def _setMaxPercentage(self, value): """ Property target used to set the maxPercentage value. If not C{None}, the value must be a C{PercentageQuantity} object. 
@raise ValueError: If the value is not a C{PercentageQuantity} """ if value is None: self._maxPercentage = None else: if not isinstance(value, PercentageQuantity): raise ValueError("Value must be a C{PercentageQuantity} object.") self._maxPercentage = value def _getMaxPercentage(self): """ Property target used to get the maxPercentage value """ return self._maxPercentage def _setMinBytes(self, value): """ Property target used to set the bytes utilized value. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._minBytes = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._minBytes = value def _getMinBytes(self): """ Property target used to get the bytes remaining value. """ return self._minBytes maxPercentage = property(_getMaxPercentage, _setMaxPercentage, None, "Maximum percentage of the media that may be utilized.") minBytes = property(_getMinBytes, _setMinBytes, None, "Minimum number of free bytes that must be available.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit specific configuration values to this extension. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, capacity, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. 
If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._capacity = None self.capacity = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.capacity) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. 
Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.capacity != other.capacity: if self.capacity < other.capacity: return -1 else: return 1 return 0 def _setCapacity(self, value): """ Property target used to set the capacity configuration value. If not C{None}, the value must be a C{CapacityConfig} object. @raise ValueError: If the value is not a C{CapacityConfig} """ if value is None: self._capacity = None else: if not isinstance(value, CapacityConfig): raise ValueError("Value must be a C{CapacityConfig} object.") self._capacity = value def _getCapacity(self): """ Property target used to get the capacity configuration value. """ return self._capacity capacity = property(_getCapacity, _setCapacity, None, "Capacity configuration in terms of a C{CapacityConfig} object.") def validate(self): """ Validates configuration represented by the object. THere must be either a percentage, or a byte capacity, but not both. @raise ValueError: If one of the validations fails. """ if self.capacity is None: raise ValueError("Capacity section is required.") if self.capacity.maxPercentage is None and self.capacity.minBytes is None: raise ValueError("Must provide either max percentage or min bytes.") if self.capacity.maxPercentage is not None and self.capacity.minBytes is not None: raise ValueError("Must provide either max percentage or min bytes, but not both.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: maxPercentage //cb_config/capacity/max_percentage minBytes //cb_config/capacity/min_bytes @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. 
""" if self.capacity is not None: sectionNode = addContainerNode(xmlDom, parentNode, "capacity") LocalConfig._addPercentageQuantity(xmlDom, sectionNode, "max_percentage", self.capacity.maxPercentage) if self.capacity.minBytes is not None: # because utility function fills in empty section on None addByteQuantityNode(xmlDom, sectionNode, "min_bytes", self.capacity.minBytes) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the capacity configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._capacity = LocalConfig._parseCapacity(parentNode) @staticmethod def _parseCapacity(parentNode): """ Parses a capacity configuration section. We read the following fields:: maxPercentage //cb_config/capacity/max_percentage minBytes //cb_config/capacity/min_bytes @param parentNode: Parent node to search beneath. @return: C{CapacityConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ capacity = None section = readFirstChild(parentNode, "capacity") if section is not None: capacity = CapacityConfig() capacity.maxPercentage = LocalConfig._readPercentageQuantity(section, "max_percentage") capacity.minBytes = readByteQuantity(section, "min_bytes") return capacity @staticmethod def _readPercentageQuantity(parent, name): """ Read a percentage quantity value from an XML document. @param parent: Parent node to search beneath. @param name: Name of node to search for. 
@return: Percentage quantity parsed from XML document """ quantity = readString(parent, name) if quantity is None: return None return PercentageQuantity(quantity) @staticmethod def _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity): """ Adds a text node as the next child of a parent, to contain a percentage quantity. If the C{percentageQuantity} is None, then no node will be created. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param percentageQuantity: PercentageQuantity object to put into the XML document @return: Reference to the newly-created node. """ if percentageQuantity is not None: addStringNode(xmlDom, parentNode, nodeName, percentageQuantity.quantity) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the capacity action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing capacity extended action.") if config.options is None or config.store is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if config.store.checkMedia: checkMediaState(config.store) # raises exception if media is not initialized capacity = createWriter(config).retrieveCapacity() logger.debug("Media capacity: %s" % capacity) if local.capacity.maxPercentage is not None: if capacity.utilized > local.capacity.maxPercentage.percentage: logger.error("Media has reached capacity limit of %s%%: %.2f%% utilized" % (local.capacity.maxPercentage.quantity, capacity.utilized)) else: # if local.capacity.bytes is not None if capacity.bytesAvailable < local.capacity.minBytes.bytes: logger.error("Media has reached capacity limit of %s: only %s available" % (displayBytes(local.capacity.minBytes.bytes), displayBytes(capacity.bytesAvailable))) logger.info("Executed the capacity extended action successfully.") CedarBackup2-2.22.0/CedarBackup2/extend/split.py0000664000175000017500000004414112122615120022763 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010,2013 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Official Cedar Backup Extensions # Revision : $Id: split.py 1028 2013-03-21 14:33:51Z pronovic $ # Purpose : Provides an extension to split up large files in staging directories. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to split up large files in staging directories. When this extension is executed, it will look through the configured Cedar Backup staging directory for files exceeding a specified size limit, and split them down into smaller files using the 'split' utility. Any directory which has already been split (as indicated by the C{cback.split} file) will be ignored. This extension requires a new configuration section and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership from CedarBackup2.xmlutil import createInputDom, addContainerNode from CedarBackup2.xmlutil import readFirstChild from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.split") SPLIT_COMMAND = [ "split", ] SPLIT_INDICATOR = "cback.split" ######################################################################## # SplitConfig class definition ######################################################################## class SplitConfig(object): """ Class representing split configuration. Split configuration is used for splitting staging directories. The following restrictions exist on data in this class: - The size limit must be a ByteQuantity - The split size must be a ByteQuantity @sort: __init__, __repr__, __str__, __cmp__, sizeLimit, splitSize """ def __init__(self, sizeLimit=None, splitSize=None): """ Constructor for the C{SplitCOnfig} class. @param sizeLimit: Size limit of the files, in bytes @param splitSize: Size that files exceeding the limit will be split into, in bytes @raise ValueError: If one of the values is invalid. """ self._sizeLimit = None self._splitSize = None self.sizeLimit = sizeLimit self.splitSize = splitSize def __repr__(self): """ Official string representation for class instance. 
""" return "SplitConfig(%s, %s)" % (self.sizeLimit, self.splitSize) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.sizeLimit != other.sizeLimit: if self.sizeLimit < other.sizeLimit: return -1 else: return 1 if self.splitSize != other.splitSize: if self.splitSize < other.splitSize: return -1 else: return 1 return 0 def _setSizeLimit(self, value): """ Property target used to set the size limit. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._sizeLimit = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._sizeLimit = value def _getSizeLimit(self): """ Property target used to get the size limit. """ return self._sizeLimit def _setSplitSize(self, value): """ Property target used to set the split size. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._splitSize = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._splitSize = value def _getSplitSize(self): """ Property target used to get the split size. 
""" return self._splitSize sizeLimit = property(_getSizeLimit, _setSizeLimit, None, doc="Size limit, as a ByteQuantity") splitSize = property(_getSplitSize, _setSplitSize, None, doc="Split size, as a ByteQuantity") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit split-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, split, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. 
@type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._split = None self.split = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.split) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.split != other.split: if self.split < other.split: return -1 else: return 1 return 0 def _setSplit(self, value): """ Property target used to set the split configuration value. If not C{None}, the value must be a C{SplitConfig} object. @raise ValueError: If the value is not a C{SplitConfig} """ if value is None: self._split = None else: if not isinstance(value, SplitConfig): raise ValueError("Value must be a C{SplitConfig} object.") self._split = value def _getSplit(self): """ Property target used to get the split configuration value. 
""" return self._split split = property(_getSplit, _setSplit, None, "Split configuration in terms of a C{SplitConfig} object.") def validate(self): """ Validates configuration represented by the object. Split configuration must be filled in. Within that, both the size limit and split size must be filled in. @raise ValueError: If one of the validations fails. """ if self.split is None: raise ValueError("Split section is required.") if self.split.sizeLimit is None: raise ValueError("Size limit must be set.") if self.split.splitSize is None: raise ValueError("Split size must be set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: sizeLimit //cb_config/split/size_limit splitSize //cb_config/split/split_size @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.split is not None: sectionNode = addContainerNode(xmlDom, parentNode, "split") addByteQuantityNode(xmlDom, sectionNode, "size_limit", self.split.sizeLimit) addByteQuantityNode(xmlDom, sectionNode, "split_size", self.split.splitSize) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the split configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._split = LocalConfig._parseSplit(parentNode) @staticmethod def _parseSplit(parent): """ Parses an split configuration section. We read the following individual fields:: sizeLimit //cb_config/split/size_limit splitSize //cb_config/split/split_size @param parent: Parent node to search beneath. 
@return: C{EncryptConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ split = None section = readFirstChild(parent, "split") if section is not None: split = SplitConfig() split.sizeLimit = readByteQuantity(section, "size_limit") split.splitSize = readByteQuantity(section, "split_size") return split ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the split backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing split extended action.") if config.options is None or config.stage is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) dailyDirs = findDailyDirs(config.stage.targetDir, SPLIT_INDICATOR) for dailyDir in dailyDirs: _splitDailyDir(dailyDir, local.split.sizeLimit, local.split.splitSize, config.options.backupUser, config.options.backupGroup) writeIndicatorFile(dailyDir, SPLIT_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the split extended action successfully.") ############################## # _splitDailyDir() function ############################## def _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup): """ Splits large files in a daily staging directory. Files that match INDICATOR_PATTERNS (i.e. C{"cback.store"}, C{"cback.stage"}, etc.) 
are assumed to be indicator files and are ignored. All other files are split. @param dailyDir: Daily directory to encrypt @param sizeLimit: Size limit, in bytes @param splitSize: Split size, in bytes @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @raise ValueError: If the encrypt mode is not supported. @raise ValueError: If the daily staging directory does not exist. """ logger.debug("Begin splitting contents of [%s]." % dailyDir) fileList = getBackupFiles(dailyDir) # ignores indicator files for path in fileList: size = float(os.stat(path).st_size) if size > sizeLimit.bytes: _splitFile(path, splitSize, backupUser, backupGroup, removeSource=True) logger.debug("Completed splitting contents of [%s]." % dailyDir) ######################## # _splitFile() function ######################## def _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False): """ Splits the source file into chunks of the indicated size. The split files will be owned by the indicated backup user and group. If C{removeSource} is C{True}, then the source file will be removed after it is successfully split. @param sourcePath: Absolute path of the source file to split @param splitSize: Encryption mode (only "gpg" is allowed) @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @param removeSource: Indicates whether to remove the source file @raise IOError: If there is a problem accessing, splitting or removing the source file. """ cwd = os.getcwd() try: if not os.path.exists(sourcePath): raise ValueError("Source path [%s] does not exist." 
% sourcePath) dirname = os.path.dirname(sourcePath) filename = os.path.basename(sourcePath) prefix = "%s_" % filename bytes = int(splitSize.bytes) # pylint: disable=W0622 os.chdir(dirname) # need to operate from directory that we want files written to command = resolveCommand(SPLIT_COMMAND) args = [ "--verbose", "--numeric-suffixes", "--suffix-length=5", "--bytes=%d" % bytes, filename, prefix, ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False) if result != 0: raise IOError("Error [%d] calling split for [%s]." % (result, sourcePath)) pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix) match = pattern.search(output[-1:][0]) if match is None: raise IOError("Unable to parse output from split command.") value = int(match.group(3).strip()) for index in range(0, value): path = "%s%05d" % (prefix, index) if not os.path.exists(path): raise IOError("After call to split, expected file [%s] does not exist." % path) changeOwnership(path, backupUser, backupGroup) if removeSource: if os.path.exists(sourcePath): try: os.remove(sourcePath) logger.debug("Completed removing old file [%s]." % sourcePath) except: raise IOError("Failed to remove file [%s] after splitting it." % (sourcePath)) finally: os.chdir(cwd) CedarBackup2-2.22.0/CedarBackup2/action.py0000664000175000017500000000331311645150366021631 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: action.py 1022 2011-10-11 23:27:49Z pronovic $ # Purpose : Provides implementation of various backup-related actions. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code for the standard actions. The code formerly in action.py was split into various other files in the CedarBackup2.actions package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # pylint: disable=W0611 from CedarBackup2.actions.collect import executeCollect from CedarBackup2.actions.stage import executeStage from CedarBackup2.actions.store import executeStore from CedarBackup2.actions.purge import executePurge from CedarBackup2.actions.rebuild import executeRebuild from CedarBackup2.actions.validate import executeValidate CedarBackup2-2.22.0/CedarBackup2/writer.py0000664000175000017500000000311311645150366021666 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: writer.py 1022 2011-10-11 23:27:49Z pronovic $ # Purpose : Provides interface backwards compatibility. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # pylint: disable=W0611 from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed from CedarBackup2.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter from CedarBackup2.writers.cdwriter import MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 CedarBackup2-2.22.0/CedarBackup2/__init__.py0000664000175000017500000000414511415155732022114 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements local and remote backups to CD or DVD media. 
Cedar Backup is a software package designed to manage system backups for a
pool of local and remote machines.  Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories.  It can also be easily extended to support other kinds of
data sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc,
with the expectation that the disc will be changed or overwritten at the
beginning of each week.  If your hardware is new enough, Cedar Backup can
write multisession discs, allowing you to add incremental data to a disc on
a daily basis.

Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python programming language.

@author: Kenneth J. Pronovici
"""

########################################################################
# Package initialization
########################################################################

# Using 'from CedarBackup2 import *' will just import the modules listed
# in the __all__ variable.

__all__ = [ 'actions', 'cli', 'config', 'extend', 'filesystem', 'knapsack',
            'peer', 'release', 'tools', 'util', 'writers', ]

CedarBackup2-2.22.0/CedarBackup2/filesystem.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2008,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: filesystem.py 1022 2011-10-11 23:27:49Z pronovic $
# Purpose  : Provides filesystem-related objects.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides filesystem-related objects.

@sort: FilesystemList, BackupFileList, PurgeItemList

@author: Kenneth J. Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import re
import math
import logging
import tarfile

# Cedar Backup modules
from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit
from CedarBackup2.util import AbsolutePathList, UnorderedList, RegexList
from CedarBackup2.util import removeKeys, displayBytes, calculateFileAge, encodePath, dereferenceLink

########################################################################
# Module-wide variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.filesystem")

########################################################################
# FilesystemList class definition
########################################################################

class FilesystemList(list):

   ######################
   # Class documentation
   ######################

   """
   Represents a list of filesystem items.

   This is a generic class that represents a list of filesystem items.
   Callers can add individual files or directories to the list, or can
   recursively add the contents of a directory.  The class also allows for
   up-front exclusions in several forms (all files, all directories, all
   items matching a pattern, all items whose basename matches a pattern, or
   all directories containing a specific "ignore file").  Symbolic links
   are typically backed up non-recursively, i.e. the link to a directory is
   backed up, but not the contents of that link (we don't want to deal with
   recursive loops, etc.).

   The custom methods such as L{addFile} will only add items if they exist
   on the filesystem and do not match any exclusions that are already in
   place.  However, since a FilesystemList is a subclass of Python's
   standard list class, callers can also add items to the list in the usual
   way, using methods like C{append()} or C{insert()}.  No validations
   apply to items added to the list in this way; however, many
   list-manipulation methods deal "gracefully" with items that don't exist
   in the filesystem, often by ignoring them.

   Once a list has been created, callers can remove individual items from
   the list using standard methods like C{pop()} or C{remove()} or they can
   use custom methods to remove specific types of entries or entries which
   match a particular pattern.

   @note: Regular expression patterns that apply to paths are assumed to be
   bounded at front and back by the beginning and end of the string, i.e.
   they are treated as if they begin with C{^} and end with C{$}.  This is
   true whether we are matching a complete path or a basename.

   @note: Some platforms, like Windows, do not support soft links.  On
   those platforms, the ignore-soft-links flag can be set, but it won't do
   any good because the operating system never reports a file as a soft
   link.
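The bounded-pattern rule in the note above can be sketched standalone.
This is a minimal illustration, not part of the class: `is_excluded` is a
hypothetical helper that applies a pattern the same way the exclude lists
do, with implicit C{^} and C{$} anchors.

```python
import re

def is_excluded(path, patterns):
    # Mirror the documented behavior: each exclusion pattern is treated as
    # if it were anchored with ^ and $, so it must match the complete path.
    for pattern in patterns:
        if re.compile(r"^%s$" % pattern).match(path):
            return True
    return False

# A pattern without wildcards only excludes an exact path, not a prefix.
print(is_excluded("/var/log", [r"/var/log"]))            # True
print(is_excluded("/var/log/syslog", [r"/var/log"]))     # False
print(is_excluded("/var/log/syslog", [r"/var/log/.*"]))  # True
```

The practical consequence is that excluding a whole subtree requires an
explicit wildcard suffix such as C{/.*}.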
   @sort: __init__, addFile, addDir, addDirContents, removeFiles,
          removeDirs, removeLinks, removeMatch, removeInvalid, normalize,
          excludeFiles, excludeDirs, excludeLinks, excludePaths,
          excludePatterns, excludeBasenamePatterns, ignoreFile
   """

   ##############
   # Constructor
   ##############

   def __init__(self):
      """Initializes a list with no configured exclusions."""
      list.__init__(self)
      self._excludeFiles = False
      self._excludeDirs = False
      self._excludeLinks = False
      self._excludePaths = None
      self._excludePatterns = None
      self._excludeBasenamePatterns = None
      self._ignoreFile = None
      self.excludeFiles = False
      self.excludeLinks = False
      self.excludeDirs = False
      self.excludePaths = []
      self.excludePatterns = RegexList()
      self.excludeBasenamePatterns = RegexList()
      self.ignoreFile = None

   #############
   # Properties
   #############

   def _setExcludeFiles(self, value):
      """
      Property target used to set the exclude files flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._excludeFiles = True
      else:
         self._excludeFiles = False

   def _getExcludeFiles(self):
      """
      Property target used to get the exclude files flag.
      """
      return self._excludeFiles

   def _setExcludeDirs(self, value):
      """
      Property target used to set the exclude directories flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._excludeDirs = True
      else:
         self._excludeDirs = False

   def _getExcludeDirs(self):
      """
      Property target used to get the exclude directories flag.
      """
      return self._excludeDirs

   def _setExcludeLinks(self, value):
      """
      Property target used to set the exclude soft links flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._excludeLinks = True
      else:
         self._excludeLinks = False

   def _getExcludeLinks(self):
      """
      Property target used to get the exclude soft links flag.
      """
      return self._excludeLinks

   def _setExcludePaths(self, value):
      """
      Property target used to set the exclude paths list.
      A C{None} value is converted to an empty list.  Elements do not have
      to exist on disk at the time of assignment.
      @raise ValueError: If any list element is not an absolute path.
      """
      self._excludePaths = AbsolutePathList()
      if value is not None:
         self._excludePaths.extend(value)

   def _getExcludePaths(self):
      """
      Property target used to get the absolute exclude paths list.
      """
      return self._excludePaths

   def _setExcludePatterns(self, value):
      """
      Property target used to set the exclude patterns list.
      A C{None} value is converted to an empty list.
      """
      self._excludePatterns = RegexList()
      if value is not None:
         self._excludePatterns.extend(value)

   def _getExcludePatterns(self):
      """
      Property target used to get the exclude patterns list.
      """
      return self._excludePatterns

   def _setExcludeBasenamePatterns(self, value):
      """
      Property target used to set the exclude basename patterns list.
      A C{None} value is converted to an empty list.
      """
      self._excludeBasenamePatterns = RegexList()
      if value is not None:
         self._excludeBasenamePatterns.extend(value)

   def _getExcludeBasenamePatterns(self):
      """
      Property target used to get the exclude basename patterns list.
      """
      return self._excludeBasenamePatterns

   def _setIgnoreFile(self, value):
      """
      Property target used to set the ignore file.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The ignore file must be a non-empty string.")
      self._ignoreFile = value

   def _getIgnoreFile(self):
      """
      Property target used to get the ignore file.
""" return self._ignoreFile excludeFiles = property(_getExcludeFiles, _setExcludeFiles, None, "Boolean indicating whether files should be excluded.") excludeDirs = property(_getExcludeDirs, _setExcludeDirs, None, "Boolean indicating whether directories should be excluded.") excludeLinks = property(_getExcludeLinks, _setExcludeLinks, None, "Boolean indicating whether soft links should be excluded.") excludePaths = property(_getExcludePaths, _setExcludePaths, None, "List of absolute paths to be excluded.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns (matching complete path) to be excluded.") excludeBasenamePatterns = property(_getExcludeBasenamePatterns, _setExcludeBasenamePatterns, None, "List of regular expression patterns (matching basename) to be excluded.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Name of file which will cause directory contents to be ignored.") ############## # Add methods ############## def addFile(self, path): """ Adds a file to the list. The path must exist and must be a file or a link to an existing file. It will be added to the list subject to any exclusions that are in place. @param path: File path to be added to the list @type path: String representing a path on disk @return: Number of items added to the list. @raise ValueError: If path is not a file or does not exist. @raise ValueError: If the path could not be encoded properly. """ path = encodePath(path) if not os.path.exists(path) or not os.path.isfile(path): logger.debug("Path [%s] is not a file or does not exist on disk." % path) raise ValueError("Path is not a file or does not exist on disk.") if self.excludeLinks and os.path.islink(path): logger.debug("Path [%s] is excluded based on excludeLinks." % path) return 0 if self.excludeFiles: logger.debug("Path [%s] is excluded based on excludeFiles." 
                      % path)
         return 0
      if path in self.excludePaths:
         logger.debug("Path [%s] is excluded based on excludePaths." % path)
         return 0
      for pattern in self.excludePatterns:
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(path):  # safe to assume all are valid due to RegexList
            logger.debug("Path [%s] is excluded based on pattern [%s]." % (path, pattern))
            return 0
      for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
            logger.debug("Path [%s] is excluded based on basename pattern [%s]." % (path, pattern))
            return 0
      self.append(path)
      logger.debug("Added file to list: [%s]" % path)
      return 1

   def addDir(self, path):
      """
      Adds a directory to the list.

      The path must exist and must be a directory or a link to an existing
      directory.  It will be added to the list subject to any exclusions
      that are in place.  The L{ignoreFile} does not apply to this method,
      only to L{addDirContents}.

      @param path: Directory path to be added to the list
      @type path: String representing a path on disk

      @return: Number of items added to the list.
      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      if not os.path.exists(path) or not os.path.isdir(path):
         logger.debug("Path [%s] is not a directory or does not exist on disk." % path)
         raise ValueError("Path is not a directory or does not exist on disk.")
      if self.excludeLinks and os.path.islink(path):
         logger.debug("Path [%s] is excluded based on excludeLinks." % path)
         return 0
      if self.excludeDirs:
         logger.debug("Path [%s] is excluded based on excludeDirs." % path)
         return 0
      if path in self.excludePaths:
         logger.debug("Path [%s] is excluded based on excludePaths."
                      % path)
         return 0
      for pattern in self.excludePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(path):
            logger.debug("Path [%s] is excluded based on pattern [%s]." % (path, pattern))
            return 0
      for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
            logger.debug("Path [%s] is excluded based on basename pattern [%s]." % (path, pattern))
            return 0
      self.append(path)
      logger.debug("Added directory to list: [%s]" % path)
      return 1

   def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False):
      """
      Adds the contents of a directory to the list.

      The path must exist and must be a directory or a link to a directory.
      The contents of the directory (as well as the directory path itself)
      will be recursively added to the list, subject to any exclusions that
      are in place.  If you only want the directory and its immediate
      contents to be added, then pass in C{recursive=False}.

      @note: If a directory's absolute path matches an exclude pattern or
      path, or if the directory contains the configured ignore file, then
      the directory and all of its contents will be recursively excluded
      from the list.

      @note: If the passed-in directory happens to be a soft link, it will
      be recursed.  However, the linkDepth parameter controls whether any
      soft links I{within} the directory will be recursed.  The link depth
      is the maximum depth of the tree at which soft links should be
      followed.  So, a depth of 0 does not follow any soft links, a depth
      of 1 follows only links within the passed-in directory, a depth of 2
      follows the links at the next level down, etc.

      @note: Any invalid soft links (i.e. soft links that point to
      non-existent items) will be silently ignored.
      @note: The L{excludeDirs} flag only controls whether any given
      directory path itself is added to the list once it has been
      discovered.  It does I{not} modify any behavior related to directory
      recursion.

      @note: If you call this method I{on a link to a directory}, that link
      will never be dereferenced (it may, however, be followed).

      @param path: Directory path whose contents should be added to the list
      @type path: String representing a path on disk

      @param recursive: Indicates whether directory contents should be added recursively.
      @type recursive: Boolean value

      @param addSelf: Indicates whether the directory itself should be added to the list.
      @type addSelf: Boolean value

      @param linkDepth: Maximum depth of the tree at which soft links should be followed
      @type linkDepth: Integer value, where zero means not to follow any soft links

      @param dereference: Indicates whether soft links, if followed, should be dereferenced
      @type dereference: Boolean value

      @return: Number of items recursively added to the list
      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      return self._addDirContentsInternal(path, addSelf, recursive, linkDepth, dereference)

   def _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False):
      """
      Internal implementation of C{addDirContents}.

      This internal implementation exists due to some refactoring.
      Basically, some subclasses have a need to add the contents of a
      directory, but not the directory itself.  This is different than the
      standard C{FilesystemList} behavior and actually ends up making a
      special case out of the first call in the recursive chain.  Since I
      don't want to expose the modified interface, C{addDirContents} ends
      up being wholly implemented in terms of this method.

      The linkDepth parameter controls whether soft links are followed when
      we are adding the contents recursively.
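The constant-depth rule can be modeled in a standalone sketch.  Everything
here is a hypothetical illustration, not part of this module: the tree is a
nest of C{(name, is_link, children)} tuples, and C{follow_links} reports
which soft links a given starting depth would actually follow.

```python
def follow_links(node, depth):
    # Each level of recursion decrements the depth; a soft link is only
    # followed while the remaining depth is greater than zero, which is the
    # same budget rule the method above applies.
    followed = []
    name, is_link, children = node
    for child in children:
        child_name, child_is_link, _ = child
        if child_is_link:
            if depth > 0:
                followed.append(child_name)
                followed += follow_links(child, depth - 1)
        else:
            followed += follow_links(child, depth - 1)
    return followed

tree = ("root", False, [
    ("top-link", True, [("nested-link", True, [])]),
    ("subdir", False, [("sub-link", True, [])]),
])

print(follow_links(tree, 0))  # []
print(follow_links(tree, 1))  # ['top-link']
print(follow_links(tree, 2))  # ['top-link', 'nested-link', 'sub-link']
```

With a depth of 1, only links directly inside the starting directory are
followed; links one level deeper (whether under a followed link or under an
ordinary subdirectory) need a depth of at least 2.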
      Any recursive calls reduce the value by one.  If the value is zero or
      less, then soft links will just be added as directories, but will not
      be followed.  This means that links are followed to a I{constant
      depth} starting from the top-most directory.

      There is one difference between soft links and directories: soft
      links that are added recursively are not placed into the list
      explicitly.  This is because if we do add the links recursively, the
      resulting tar file gets a little confused (it has a link and a
      directory with the same name).

      @note: If you call this method I{on a link to a directory}, that link
      will never be dereferenced (it may, however, be followed).

      @param path: Directory path whose contents should be added to the list.
      @param includePath: Indicates whether to include the path as well as contents.
      @param recursive: Indicates whether directory contents should be added recursively.
      @param linkDepth: Depth of soft links that should be followed
      @param dereference: Indicates whether soft links, if followed, should be dereferenced

      @return: Number of items recursively added to the list
      @raise ValueError: If path is not a directory or does not exist.
      """
      added = 0
      if not os.path.exists(path) or not os.path.isdir(path):
         logger.debug("Path [%s] is not a directory or does not exist on disk." % path)
         raise ValueError("Path is not a directory or does not exist on disk.")
      if path in self.excludePaths:
         logger.debug("Path [%s] is excluded based on excludePaths." % path)
         return added
      for pattern in self.excludePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(path):
            logger.debug("Path [%s] is excluded based on pattern [%s]."
                         % (path, pattern))
            return added
      for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
            logger.debug("Path [%s] is excluded based on basename pattern [%s]." % (path, pattern))
            return added
      if self.ignoreFile is not None and os.path.exists(os.path.join(path, self.ignoreFile)):
         logger.debug("Path [%s] is excluded based on ignore file." % path)
         return added
      if includePath:
         added += self.addDir(path)  # could actually be excluded by addDir, yet
      for entry in os.listdir(path):
         entrypath = os.path.join(path, entry)
         if os.path.isfile(entrypath):
            if linkDepth > 0 and dereference:
               derefpath = dereferenceLink(entrypath)
               if derefpath != entrypath:
                  added += self.addFile(derefpath)
            added += self.addFile(entrypath)
         elif os.path.isdir(entrypath):
            if os.path.islink(entrypath):
               if recursive:
                  if linkDepth > 0:
                     newDepth = linkDepth - 1
                     if dereference:
                        derefpath = dereferenceLink(entrypath)
                        if derefpath != entrypath:
                           added += self._addDirContentsInternal(derefpath, True, recursive, newDepth, dereference)
                        added += self.addDir(entrypath)
                     else:
                        added += self._addDirContentsInternal(entrypath, False, recursive, newDepth, dereference)
                  else:
                     added += self.addDir(entrypath)
               else:
                  added += self.addDir(entrypath)
            else:
               if recursive:
                  newDepth = linkDepth - 1
                  added += self._addDirContentsInternal(entrypath, True, recursive, newDepth, dereference)
               else:
                  added += self.addDir(entrypath)
      return added

   #################
   # Remove methods
   #################

   def removeFiles(self, pattern=None):
      """
      Removes file entries from the list.

      If C{pattern} is not passed in or is C{None}, then all file entries
      will be removed from the list.  Otherwise, only those file entries
      matching the pattern will be removed.  Any entry which does not exist
      on disk will be ignored (use L{removeInvalid} to purge those
      entries).
      This method might be fairly slow for large lists, since it must check
      the type of each item in the list.  If you know ahead of time that
      you want to exclude all files, then you will be better off setting
      L{excludeFiles} to C{True} before adding items to the list.

      @param pattern: Regular expression pattern representing entries to remove

      @return: Number of entries removed
      @raise ValueError: If the passed-in pattern is not a valid regular expression.
      """
      removed = 0
      if pattern is None:
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isfile(entry):
               self.remove(entry)
               logger.debug("Removed path [%s] from list." % entry)
               removed += 1
      else:
         try:
            pattern = encodePath(pattern)  # use same encoding as filenames
            compiled = re.compile(pattern)
         except re.error:
            raise ValueError("Pattern is not a valid regular expression.")
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isfile(entry):
               if compiled.match(entry):
                  self.remove(entry)
                  logger.debug("Removed path [%s] from list." % entry)
                  removed += 1
      logger.debug("Removed a total of %d entries." % removed)
      return removed

   def removeDirs(self, pattern=None):
      """
      Removes directory entries from the list.

      If C{pattern} is not passed in or is C{None}, then all directory
      entries will be removed from the list.  Otherwise, only those
      directory entries matching the pattern will be removed.  Any entry
      which does not exist on disk will be ignored (use L{removeInvalid} to
      purge those entries).

      This method might be fairly slow for large lists, since it must check
      the type of each item in the list.  If you know ahead of time that
      you want to exclude all directories, then you will be better off
      setting L{excludeDirs} to C{True} before adding items to the list
      (note that this will not prevent you from recursively adding the
      I{contents} of directories).
      @param pattern: Regular expression pattern representing entries to remove

      @return: Number of entries removed
      @raise ValueError: If the passed-in pattern is not a valid regular expression.
      """
      removed = 0
      if pattern is None:
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isdir(entry):
               self.remove(entry)
               logger.debug("Removed path [%s] from list." % entry)
               removed += 1
      else:
         try:
            pattern = encodePath(pattern)  # use same encoding as filenames
            compiled = re.compile(pattern)
         except re.error:
            raise ValueError("Pattern is not a valid regular expression.")
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isdir(entry):
               if compiled.match(entry):
                  self.remove(entry)
                  logger.debug("Removed path [%s] from list based on pattern [%s]." % (entry, pattern))
                  removed += 1
      logger.debug("Removed a total of %d entries." % removed)
      return removed

   def removeLinks(self, pattern=None):
      """
      Removes soft link entries from the list.

      If C{pattern} is not passed in or is C{None}, then all soft link
      entries will be removed from the list.  Otherwise, only those soft
      link entries matching the pattern will be removed.  Any entry which
      does not exist on disk will be ignored (use L{removeInvalid} to purge
      those entries).

      This method might be fairly slow for large lists, since it must check
      the type of each item in the list.  If you know ahead of time that
      you want to exclude all soft links, then you will be better off
      setting L{excludeLinks} to C{True} before adding items to the list.

      @param pattern: Regular expression pattern representing entries to remove

      @return: Number of entries removed
      @raise ValueError: If the passed-in pattern is not a valid regular expression.
      """
      removed = 0
      if pattern is None:
         for entry in self[:]:
            if os.path.exists(entry) and os.path.islink(entry):
               self.remove(entry)
               logger.debug("Removed path [%s] from list."
                            % entry)
               removed += 1
      else:
         try:
            pattern = encodePath(pattern)  # use same encoding as filenames
            compiled = re.compile(pattern)
         except re.error:
            raise ValueError("Pattern is not a valid regular expression.")
         for entry in self[:]:
            if os.path.exists(entry) and os.path.islink(entry):
               if compiled.match(entry):
                  self.remove(entry)
                  logger.debug("Removed path [%s] from list based on pattern [%s]." % (entry, pattern))
                  removed += 1
      logger.debug("Removed a total of %d entries." % removed)
      return removed

   def removeMatch(self, pattern):
      """
      Removes from the list all entries matching a pattern.

      This method removes from the list all entries which match the passed
      in C{pattern}.  Since there is no need to check the type of each
      entry, it is faster to call this method than to call the
      L{removeFiles}, L{removeDirs} or L{removeLinks} methods individually.
      If you know which patterns you will want to remove ahead of time, you
      may be better off setting L{excludePatterns} or
      L{excludeBasenamePatterns} before adding items to the list.

      @note: Unlike when using the exclude lists, the pattern here is
      I{not} bounded at the front and the back of the string.  You can use
      any pattern you want.

      @param pattern: Regular expression pattern representing entries to remove

      @return: Number of entries removed.
      @raise ValueError: If the passed-in pattern is not a valid regular expression.
      """
      try:
         pattern = encodePath(pattern)  # use same encoding as filenames
         compiled = re.compile(pattern)
      except re.error:
         raise ValueError("Pattern is not a valid regular expression.")
      removed = 0
      for entry in self[:]:
         if compiled.match(entry):
            self.remove(entry)
            logger.debug("Removed path [%s] from list based on pattern [%s]." % (entry, pattern))
            removed += 1
      logger.debug("Removed a total of %d entries." % removed)
      return removed

   def removeInvalid(self):
      """
      Removes from the list all entries that do not exist on disk.

      This method removes from the list all entries which do not currently
      exist on disk in some form.
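As a minimal standalone sketch of this behavior (C{remove_invalid} is a
hypothetical helper written for illustration, not the class method itself):

```python
import os
import tempfile

def remove_invalid(entries):
    # Keep only entries that still exist on disk; as with the method
    # described above, no attention is paid to whether an entry is a file
    # or a directory, only to whether it exists in some form.
    return [entry for entry in entries if os.path.exists(entry)]

scratch = tempfile.mkdtemp()
entries = [scratch, os.path.join(scratch, "no-such-file")]
print(remove_invalid(entries))  # only the existing directory survives
```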
      No attention is paid to whether the entries are files or directories.

      @return: Number of entries removed.
      """
      removed = 0
      for entry in self[:]:
         if not os.path.exists(entry):
            self.remove(entry)
            logger.debug("Removed path [%s] from list." % entry)
            removed += 1
      logger.debug("Removed a total of %d entries." % removed)
      return removed

   ##################
   # Utility methods
   ##################

   def normalize(self):
      """Normalizes the list, ensuring that each entry is unique."""
      orig = len(self)
      self.sort()
      dups = filter(lambda x, self=self: self[x] == self[x+1], range(0, len(self) - 1))
      items = map(lambda x, self=self: self[x], dups)
      map(self.remove, items)
      new = len(self)
      logger.debug("Completed normalizing list; removed %d items (%d originally, %d now)." % (orig-new, orig, new))

   def verify(self):
      """
      Verifies that all entries in the list exist on disk.
      @return: C{True} if all entries exist, C{False} otherwise.
      """
      for entry in self:
         if not os.path.exists(entry):
            logger.debug("Path [%s] is invalid; list is not valid." % entry)
            return False
      logger.debug("All entries in list are valid.")
      return True

########################################################################
# SpanItem class definition
########################################################################

class SpanItem(object):  # pylint: disable=R0903

   """
   Item returned by L{BackupFileList.generateSpan}.
   """

   def __init__(self, fileList, size, capacity, utilization):
      """
      Create object.
      @param fileList: List of files
      @param size: Size (in bytes) of files
      @param capacity: Capacity (in bytes) of the media the files were fitted to
      @param utilization: Utilization, as a percentage (0-100)
      """
      self.fileList = fileList
      self.size = size
      self.capacity = capacity
      self.utilization = utilization

########################################################################
# BackupFileList class definition
########################################################################

class BackupFileList(FilesystemList):  # pylint: disable=R0904

   ######################
   # Class documentation
   ######################

   """
   List of files to be backed up.

   A BackupFileList is a L{FilesystemList} containing a list of files to be
   backed up.  It only contains files, not directories (soft links are
   treated like files).

   On top of the generic functionality provided by L{FilesystemList}, this
   class adds functionality to keep a hash (checksum) for each file in the
   list, and it also provides a method to calculate the total size of the
   files in the list and a way to export the list into tar form.

   @sort: __init__, addDir, totalSize, generateSizeMap, generateDigestMap,
          generateFitted, generateTarfile, removeUnchanged
   """

   ##############
   # Constructor
   ##############

   def __init__(self):
      """Initializes a list with no configured exclusions."""
      FilesystemList.__init__(self)

   ################################
   # Overridden superclass methods
   ################################

   def addDir(self, path):
      """
      Adds a directory to the list.

      Note that this class does not allow directories to be added by
      themselves (a backup list contains only files).  However, since links
      to directories are technically files, we allow them to be added.

      This method is implemented in terms of the superclass method, with
      one additional validation: the superclass method is only called if
      the passed-in path is both a directory and a link.  All of the
      superclass's existing validations and restrictions apply.
      @param path: Directory path to be added to the list
      @type path: String representing a path on disk

      @return: Number of items added to the list.
      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      if os.path.isdir(path) and not os.path.islink(path):
         return 0
      else:
         return FilesystemList.addDir(self, path)

   ##################
   # Utility methods
   ##################

   def totalSize(self):
      """
      Returns the total size among all files in the list.

      Only files are counted.  Soft links that point at files are ignored.
      Entries which do not exist on disk are ignored.

      @return: Total size, in bytes
      """
      total = 0.0
      for entry in self:
         if os.path.isfile(entry) and not os.path.islink(entry):
            total += float(os.stat(entry).st_size)
      return total

   def generateSizeMap(self):
      """
      Generates a mapping from file to file size in bytes.

      The mapping does include soft links, which are listed with size zero.
      Entries which do not exist on disk are ignored.

      @return: Dictionary mapping file to file size
      """
      table = { }
      for entry in self:
         if os.path.islink(entry):
            table[entry] = 0.0
         elif os.path.isfile(entry):
            table[entry] = float(os.stat(entry).st_size)
      return table

   def generateDigestMap(self, stripPrefix=None):
      """
      Generates a mapping from file to file digest.

      Currently, the digest is an SHA hash, which should be pretty secure.
      In the future, this might be a different kind of hash, but we
      guarantee that the type of the hash will not change unless the
      library major version number is bumped.

      Entries which do not exist on disk are ignored.

      Soft links are ignored.  We would end up generating a digest for the
      file that the soft link points at, which doesn't make any sense.

      If C{stripPrefix} is passed in, then that prefix will be stripped
      from each key when the map is generated.  This can be useful in
      generating two "relative" digest maps to be compared to one another.
      @param stripPrefix: Common prefix to be stripped from paths
      @type stripPrefix: String with any contents

      @return: Dictionary mapping file to digest value
      @see: L{removeUnchanged}
      """
      table = { }
      if stripPrefix is not None:
         for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
               table[entry.replace(stripPrefix, "", 1)] = BackupFileList._generateDigest(entry)
      else:
         for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
               table[entry] = BackupFileList._generateDigest(entry)
      return table

   @staticmethod
   def _generateDigest(path):
      """
      Generates an SHA digest for a given file on disk.

      The original code for this function used this simplistic
      implementation, which requires reading the entire file into memory at
      once in order to generate a digest value::

         sha.new(open(path).read()).hexdigest()

      Not surprisingly, this isn't an optimal solution.  The U{Simple file
      hashing} Python Cookbook recipe describes how to incrementally
      generate a hash value by reading in chunks of data rather than
      reading the file all at once.  The recipe relies on the C{update()}
      method of the various Python hashing algorithms.

      In my tests using a 110 MB file on CD, the original implementation
      requires 111 seconds.  This implementation requires only 40-45
      seconds, which is a pretty substantial speed-up.

      Experience shows that reading in around 4kB (4096 bytes) at a time
      yields the best performance.  Smaller reads are quite a bit slower,
      and larger reads don't make much of a difference.  The 4kB number
      makes me a little suspicious, and I think it might be related to the
      size of a filesystem read at the hardware level.  However, I've
      decided to just hardcode 4096 until I have evidence that shows it's
      worthwhile making the read size configurable.

      @param path: Path to generate digest for.

      @return: ASCII-safe SHA digest for the file.
      @raise OSError: If the file cannot be opened.
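A standalone sketch of the incremental-read approach described above
(C{generate_digest} is a hypothetical helper written for illustration; the
real method below uses the same chunked C{update()} technique):

```python
import hashlib
import os
import tempfile

def generate_digest(path, chunk_size=4096):
    # Incremental SHA-1 digest, reading ~4kB at a time as described above,
    # so the whole file never has to fit in memory at once.
    s = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # empty read means end of file
                break
            s.update(chunk)
    return s.hexdigest()

# Sanity check: the chunked digest matches a one-shot digest of the data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"x" * 10000)
assert generate_digest(path) == hashlib.sha1(b"x" * 10000).hexdigest()
os.remove(path)
```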
""" # pylint: disable=C0103 try: import hashlib s = hashlib.sha1() except ImportError: import sha s = sha.new() f = open(path, mode="rb") # in case platform cares about binary reads readBytes = 4096 # see notes above while(readBytes > 0): readString = f.read(readBytes) s.update(readString) readBytes = len(readString) f.close() digest = s.hexdigest() logger.debug("Generated digest [%s] for file [%s]." % (digest, path)) return digest def generateFitted(self, capacity, algorithm="worst_fit"): """ Generates a list of items that fit in the indicated capacity. Sometimes, callers would like to include every item in a list, but are unable to because not all of the items fit in the space available. This method returns a copy of the list, containing only the items that fit in a given capacity. A copy is returned so that we don't lose any information if for some reason the fitted list is unsatisfactory. The fitting is done using the functions in the knapsack module. By default, the first fit algorithm is used, but you can also choose from best fit, worst fit and alternate fit. @param capacity: Maximum capacity among the files in the new list @type capacity: Integer, in bytes @param algorithm: Knapsack (fit) algorithm to use @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit" @return: Copy of list with total size no larger than indicated capacity @raise ValueError: If the algorithm is invalid. """ table = self._getKnapsackTable() function = BackupFileList._getKnapsackFunction(algorithm) return function(table, capacity)[0] def generateSpan(self, capacity, algorithm="worst_fit"): """ Splits the list of items into sub-lists that fit in a given capacity. Sometimes, callers need split to a backup file list into a set of smaller lists. For instance, you could use this to "span" the files across a set of discs. The fitting is done using the functions in the knapsack module. 
By default, the worst fit algorithm is used, but you can also choose from first fit, best fit and alternate fit. @note: If any of your items are larger than the capacity, then it won't be possible to find a solution. In this case, a value error will be raised. @param capacity: Maximum capacity among the files in the new list @type capacity: Integer, in bytes @param algorithm: Knapsack (fit) algorithm to use @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit" @return: List of L{SpanItem} objects. @raise ValueError: If the algorithm is invalid. @raise ValueError: If it's not possible to fit some items """ spanItems = [] function = BackupFileList._getKnapsackFunction(algorithm) table = self._getKnapsackTable(capacity) iteration = 0 while len(table) > 0: iteration += 1 fit = function(table, capacity) if len(fit[0]) == 0: # Should never happen due to validations in _getKnapsackTable(), but let's be safe raise ValueError("After iteration %d, unable to add any new items." % iteration) removeKeys(table, fit[0]) utilization = (float(fit[1])/float(capacity))*100.0 item = SpanItem(fit[0], fit[1], capacity, utilization) spanItems.append(item) return spanItems def _getKnapsackTable(self, capacity=None): """ Converts the list into the form needed by the knapsack algorithms. @param capacity: Optional maximum capacity; if set, any file larger than this is rejected. @return: Dictionary mapping file name to tuple of (file path, file size). @raise ValueError: If a file cannot fit in the indicated capacity. """ table = { } for entry in self: if os.path.islink(entry): table[entry] = (entry, 0.0) elif os.path.isfile(entry): size = float(os.stat(entry).st_size) if capacity is not None: if size > capacity: raise ValueError("File [%s] cannot fit in capacity %s." % (entry, displayBytes(capacity))) table[entry] = (entry, size) return table @staticmethod def _getKnapsackFunction(algorithm): """ Returns a reference to the function associated with an algorithm name.
Algorithm name must be one of "first_fit", "best_fit", "worst_fit", "alternate_fit" @param algorithm: Name of the algorithm @return: Reference to knapsack function @raise ValueError: If the algorithm name is unknown. """ if algorithm == "first_fit": return firstFit elif algorithm == "best_fit": return bestFit elif algorithm == "worst_fit": return worstFit elif algorithm == "alternate_fit": return alternateFit else: raise ValueError("Algorithm [%s] is invalid." % algorithm) def generateTarfile(self, path, mode='tar', ignore=False, flat=False): """ Creates a tar file containing the files in the list. By default, this method will create uncompressed tar files. If you pass in mode C{'targz'}, then it will create gzipped tar files, and if you pass in mode C{'tarbz2'}, then it will create bzipped tar files. The tar file will be created as a GNU tar archive, which enables extended file name lengths, etc. Since GNU tar is so prevalent, I've decided that the extra functionality outweighs the disadvantage of not being "standard". If you pass in C{flat=True}, then a "flat" archive will be created, and all of the files will be added to the root of the archive. So, the file C{/tmp/something/whatever.txt} would be added as just C{whatever.txt}. By default, the whole method call fails if there are problems adding any of the files to the archive, resulting in an exception. Under these circumstances, callers are advised that they might want to call L{removeInvalid()} and then attempt to generate the tar file a second time, since the most common cause of failures is a missing file (a file that existed when the list was built, but is gone again by the time the tar file is built). If you want to, you can pass in C{ignore=True}, and the method will ignore errors encountered when adding individual files to the archive (but not errors opening and closing the archive itself). We'll always attempt to remove the tarfile from disk if an exception is thrown.
@note: No validation is done as to whether the entries in the list are files, since only files or soft links should be in an object like this. However, to be safe, everything is explicitly added to the tar archive non-recursively so it's safe to include soft links to directories. @note: The Python C{tarfile} module, which is used internally here, is supposed to deal properly with long filenames and links. In my testing, I have found that it appears to be able to add really long filenames to archives, but doesn't do a good job reading them back out, even out of an archive it created. Fortunately, all Cedar Backup does is add files to archives. @param path: Path of tar file to create on disk @type path: String representing a path on disk @param mode: Tar creation mode @type mode: One of either C{'tar'}, C{'targz'} or C{'tarbz2'} @param ignore: Indicates whether to ignore certain errors. @type ignore: Boolean @param flat: Creates "flat" archive by putting all items in root @type flat: Boolean @raise ValueError: If mode is not valid @raise ValueError: If list is empty @raise ValueError: If the path could not be encoded properly. @raise TarError: If there is a problem creating the tar file """ # pylint: disable=E1101 path = encodePath(path) if len(self) == 0: raise ValueError("Empty list cannot be used to generate tarfile.") if(mode == 'tar'): tarmode = "w:" elif(mode == 'targz'): tarmode = "w:gz" elif(mode == 'tarbz2'): tarmode = "w:bz2" else: raise ValueError("Mode [%s] is not valid." % mode) try: tar = tarfile.open(path, tarmode) try: tar.format = tarfile.GNU_FORMAT except AttributeError: tar.posix = False for entry in self: try: if flat: tar.add(entry, arcname=os.path.basename(entry), recursive=False) else: tar.add(entry, recursive=False) except tarfile.TarError, e: if not ignore: raise e logger.info("Unable to add file [%s]; going on anyway."
% entry) except OSError, e: if not ignore: raise tarfile.TarError(e) logger.info("Unable to add file [%s]; going on anyway." % entry) tar.close() except tarfile.ReadError, e: try: tar.close() except: pass if os.path.exists(path): try: os.remove(path) except: pass raise tarfile.ReadError("Unable to open [%s]; maybe directory doesn't exist?" % path) except tarfile.TarError, e: try: tar.close() except: pass if os.path.exists(path): try: os.remove(path) except: pass raise e def removeUnchanged(self, digestMap, captureDigest=False): """ Removes unchanged entries from the list. This method relies on a digest map as returned from L{generateDigestMap}. For each entry in C{digestMap}, if the entry also exists in the current list I{and} the entry in the current list has the same digest value as in the map, the entry in the current list will be removed. This method offers a convenient way for callers to filter unneeded entries from a list. The idea is that a caller will capture a digest map from C{generateDigestMap} at some point in time (perhaps the beginning of the week), and will save off that map using C{pickle} or some other method. Then, the caller could use this method sometime in the future to filter out any unchanged files based on the saved-off map. If C{captureDigest} is passed-in as C{True}, then digest information will be captured for the entire list before the removal step occurs using the same rules as in L{generateDigestMap}. The check will involve a lookup into the complete digest map. If C{captureDigest} is passed in as C{False}, we will only generate a digest value for files we actually need to check, and we'll ignore any entry in the list which isn't a file that currently exists on disk. The return value varies depending on C{captureDigest}, as well. To preserve backwards compatibility, if C{captureDigest} is C{False}, then we'll just return a single value representing the number of entries removed. 
Otherwise, we'll return a tuple of C{(entries removed, digest map)}. The returned digest map will be in exactly the form returned by L{generateDigestMap}. @note: For performance reasons, this method actually ends up rebuilding the list from scratch. First, we build a temporary dictionary containing all of the items from the original list. Then, we remove items as needed from the dictionary (which is faster than the equivalent operation on a list). Finally, we replace the contents of the current list based on the keys left in the dictionary. This should be transparent to the caller. @param digestMap: Dictionary mapping file name to digest value. @type digestMap: Map as returned from L{generateDigestMap}. @param captureDigest: Indicates that digest information should be captured. @type captureDigest: Boolean @return: Number of entries removed """ if captureDigest: removed = 0 table = {} captured = {} for entry in self: if os.path.isfile(entry) and not os.path.islink(entry): table[entry] = BackupFileList._generateDigest(entry) captured[entry] = table[entry] else: table[entry] = None for entry in digestMap.keys(): if table.has_key(entry): if table[entry] is not None: # equivalent to file/link check in other case digest = table[entry] if digest == digestMap[entry]: removed += 1 del table[entry] logger.debug("Discarded unchanged file [%s]." % entry) self[:] = table.keys() return (removed, captured) else: removed = 0 table = {} for entry in self: table[entry] = None for entry in digestMap.keys(): if table.has_key(entry): if os.path.isfile(entry) and not os.path.islink(entry): digest = BackupFileList._generateDigest(entry) if digest == digestMap[entry]: removed += 1 del table[entry] logger.debug("Discarded unchanged file [%s]." 
% entry) self[:] = table.keys() return removed ######################################################################## # PurgeItemList class definition ######################################################################## class PurgeItemList(FilesystemList): # pylint: disable=R0904 ###################### # Class documentation ###################### """ List of files and directories to be purged. A PurgeItemList is a L{FilesystemList} containing a list of files and directories to be purged. On top of the generic functionality provided by L{FilesystemList}, this class adds functionality to remove items that are too young to be purged, and to actually remove each item in the list from the filesystem. The other main difference is that when you add a directory's contents to a purge item list, the directory itself is not added to the list. This way, if someone asks to purge within C{/opt/backup/collect}, that directory doesn't get removed once all of the files within it are gone. """ ############## # Constructor ############## def __init__(self): """Initializes a list with no configured exclusions.""" FilesystemList.__init__(self) ############## # Add methods ############## def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False): """ Adds the contents of a directory to the list. The path must exist and must be a directory or a link to a directory. The contents of the directory (but I{not} the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory's immediate contents to be added, then pass in C{recursive=False}. @note: If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list. @note: If the passed-in directory happens to be a soft link, it will be recursed.
However, the linkDepth parameter controls whether any soft links I{within} the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc. @note: Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored. @note: The L{excludeLinks} flag only controls whether any given soft link path itself is added to the list once it has been discovered. It does I{not} modify any behavior related to directory recursion. @note: The L{excludeDirs} flag only controls whether any given directory path itself is added to the list once it has been discovered. It does I{not} modify any behavior related to directory recursion. @note: If you call this method I{on a link to a directory}, that link will never be dereferenced (it may, however, be followed). @param path: Directory path whose contents should be added to the list @type path: String representing a path on disk @param recursive: Indicates whether directory contents should be added recursively. @type recursive: Boolean value @param addSelf: Ignored in this subclass. @param linkDepth: Depth of soft links that should be followed @type linkDepth: Integer value, where zero means not to follow any soft links @param dereference: Indicates whether soft links, if followed, should be dereferenced @type dereference: Boolean value @return: Number of items recursively added to the list @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly.
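Once contents have been added, a purge list is typically filtered by age before purging. The whole-day age rule used by removeYoungFiles() (below) can be sketched in modern Python; these helper names are hypothetical, not library code:

```python
import math
import os
import time

def file_age_in_whole_days(path):
    # Whole-day age of a file, per the most recent of st_atime and
    # st_mtime -- the same rule removeYoungFiles() applies.
    st = os.stat(path)
    age_seconds = time.time() - max(st.st_atime, st.st_mtime)
    return int(math.floor(age_seconds / 86400.0))

def keep_old_enough(paths, days_old):
    # Keep only regular files at least days_old whole days old,
    # mimicking what remains in a list after removeYoungFiles(days_old).
    return [p for p in paths
            if os.path.isfile(p) and not os.path.islink(p)
            and file_age_in_whole_days(p) >= days_old]
```

Note that the fractional age is floored, so a file 1.9 days old still counts as 1 whole day old.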
""" path = encodePath(path) path = normalizeDir(path) return super(PurgeItemList, self)._addDirContentsInternal(path, False, recursive, linkDepth, dereference) ################## # Utility methods ################## def removeYoungFiles(self, daysOld): """ Removes from the list files younger than a certain age (in days). Any file whose "age" in days is less than (C{<}) the value of the C{daysOld} parameter will be removed from the list so that it will not be purged later when L{purgeItems} is called. Directories and soft links will be ignored. The "age" of a file is the amount of time since the file was last used, per the most recent of the file's C{st_atime} and C{st_mtime} values. @note: Some people find the "sense" of this method confusing or "backwards". Keep in mind that this method is used to remove items I{from the list}, not from the filesystem! It removes from the list those items that you would I{not} want to purge because they are too young. As an example, passing in C{daysOld} of zero (0) would remove from the list no files, which would result in purging all of the files later. I would be happy to make a synonym of this method with an easier-to-understand "sense", if someone can suggest one. @param daysOld: Minimum age of files that are to be kept in the list. @type daysOld: Integer value >= 0. @return: Number of entries removed """ removed = 0 daysOld = int(daysOld) if daysOld < 0: raise ValueError("Days old value must be an integer >= 0.") for entry in self[:]: if os.path.isfile(entry) and not os.path.islink(entry): try: ageInDays = calculateFileAge(entry) ageInWholeDays = math.floor(ageInDays) if ageInWholeDays < daysOld: removed += 1 self.remove(entry) except OSError: pass return removed def purgeItems(self): """ Purges all items in the list. Every item in the list will be purged. Directories in the list will I{not} be purged recursively, and hence will only be removed if they are empty. Errors will be ignored. 
To facilitate easy removal of directories that will end up being empty, the delete process happens in two passes: files first (including soft links), then directories. @return: Tuple containing count of (files, dirs) removed """ files = 0 dirs = 0 for entry in self: if os.path.exists(entry) and (os.path.isfile(entry) or os.path.islink(entry)): try: os.remove(entry) files += 1 logger.debug("Purged file [%s]." % entry) except OSError: pass for entry in self: if os.path.exists(entry) and os.path.isdir(entry) and not os.path.islink(entry): try: os.rmdir(entry) dirs += 1 logger.debug("Purged empty directory [%s]." % entry) except OSError: pass return (files, dirs) ######################################################################## # Public functions ######################################################################## ########################## # normalizeDir() function ########################## def normalizeDir(path): """ Normalizes a directory name. For our purposes, a directory name is normalized by removing the trailing path separator, if any. This is important because we want directories to appear within lists in a consistent way, although from the user's perspective passing in C{/path/to/dir/} and C{/path/to/dir} are equivalent. @param path: Path to be normalized. @type path: String representing a path on disk @return: Normalized path, which should be equivalent to the original. """ if path != os.sep and path[-1:] == os.sep: return path[:-1] return path ############################# # compareContents() function ############################# def compareContents(path1, path2, verbose=False): """ Compares the contents of two directories to see if they are equivalent. The two directories are recursively compared. First, we check whether they contain exactly the same set of files. Then, we check to see whether every given file has exactly the same contents in both directories.
This is all relatively simple to implement through the magic of L{BackupFileList.generateDigestMap}, which knows how to strip a path prefix off the front of each entry in the mapping it generates. This makes our comparison as simple as creating a list for each path, then generating a digest map for each path and comparing the two. If no exception is thrown, the two directories are considered identical. If the C{verbose} flag is C{True}, then an alternate (but slower) method is used so that any thrown exception can indicate exactly which file caused the comparison to fail. The thrown C{ValueError} exception distinguishes between the directories containing different files, and containing the same files with differing content. @note: Symlinks are I{not} followed for the purposes of this comparison. @param path1: First path to compare. @type path1: String representing a path on disk @param path2: Second path to compare. @type path2: String representing a path on disk @param verbose: Indicates whether a verbose response should be given. @type verbose: Boolean @raise ValueError: If a directory doesn't exist or can't be read. @raise ValueError: If the two directories are not equivalent. @raise IOError: If there is an unusual problem reading the directories. """ try: path1List = BackupFileList() path1List.addDirContents(path1) path1Digest = path1List.generateDigestMap(stripPrefix=normalizeDir(path1)) path2List = BackupFileList() path2List.addDirContents(path2) path2Digest = path2List.generateDigestMap(stripPrefix=normalizeDir(path2)) compareDigestMaps(path1Digest, path2Digest, verbose) except IOError, e: logger.error("I/O error encountered during consistency check.") raise e def compareDigestMaps(digest1, digest2, verbose=False): """ Compares two digest maps and throws an exception if they differ. @param digest1: First digest to compare. @type digest1: Digest as returned from BackupFileList.generateDigestMap() @param digest2: Second digest to compare.
@type digest2: Digest as returned from BackupFileList.generateDigestMap() @param verbose: Indicates whether a verbose response should be given. @type verbose: Boolean @raise ValueError: If the two directories are not equivalent. """ if not verbose: if digest1 != digest2: raise ValueError("Consistency check failed.") else: list1 = UnorderedList(digest1.keys()) list2 = UnorderedList(digest2.keys()) if list1 != list2: raise ValueError("Directories contain a different set of files.") for key in list1: if digest1[key] != digest2[key]: raise ValueError("File contents for [%s] vary between directories." % key) CedarBackup2-2.22.0/CedarBackup2/image.py0000664000175000017500000000257311645150366021445 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: image.py 1022 2011-10-11 23:27:49Z pronovic $ # Purpose : Provides interface backwards compatibility. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## from CedarBackup2.writers.util import IsoImage # pylint: disable=W0611 CedarBackup2-2.22.0/CedarBackup2/testutil.py0000664000175000017500000004374512122614501022231 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2006,2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: testutil.py 1023 2011-10-11 23:44:50Z pronovic $ # Purpose : Provides unit-testing utilities. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides unit-testing utilities. These utilities are kept here, separate from util.py, because they provide common functionality that I do not want exported "publicly" once Cedar Backup is installed on a system. 
They are only used for unit testing, and are only useful within the source tree. Many of these functions are in here because they are "good enough" for unit test work but are not robust enough to be real public functions. Others (like L{removedir}) do what they are supposed to, but I don't want responsibility for making them available to others. @sort: findResources, commandAvailable, buildPath, removedir, extractTar, changeFileAge, getMaskAsMode, getLogin, failUnlessAssignRaises, runningAsRoot, platformDebian, platformMacOsX, platformCygwin, platformWindows, platformHasEcho, platformSupportsLinks, platformSupportsPermissions, platformRequiresBinaryRead @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import os import tarfile import time import getpass import random import string # pylint: disable=W0402 import platform import logging from StringIO import StringIO from CedarBackup2.util import encodePath, executeCommand from CedarBackup2.config import Config, OptionsConfig from CedarBackup2.customize import customizeOverrides from CedarBackup2.cli import setupPathResolver ######################################################################## # Public functions ######################################################################## ############################## # setupDebugLogger() function ############################## def setupDebugLogger(): """ Sets up a screen logger for debugging purposes. Normally, the CLI functionality configures the logger so that things get written to the right place. However, for debugging it's sometimes nice to just get everything -- debug information and output -- dumped to the screen. This function takes care of that. 
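A minimal modern-Python sketch of this debug setup follows; note that recent C{logging} versions take the stream as a positional argument (the C{strm=} keyword was removed in Python 2.7), and the C{setup_debug_logger} name is illustrative only:

```python
import logging
import sys

def setup_debug_logger(name="CedarBackup2"):
    # Attach a DEBUG-level handler that writes everything to stdout.
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)  # let the logger see all messages
    handler = logging.StreamHandler(sys.stdout)  # positional stream, not strm=
    handler.setFormatter(logging.Formatter(fmt="%(message)s"))
    handler.setLevel(logging.DEBUG)
    logger.addHandler(handler)
    return logger
```

With this in place, both debug information and normal output are dumped to the screen, which is the behavior described above.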
""" logger = logging.getLogger("CedarBackup2") logger.setLevel(logging.DEBUG) # let the logger see all messages formatter = logging.Formatter(fmt="%(message)s") handler = logging.StreamHandler(strm=sys.stdout) handler.setFormatter(formatter) handler.setLevel(logging.DEBUG) logger.addHandler(handler) ################# # setupOverrides ################# def setupOverrides(): """ Set up any platform-specific overrides that might be required. When packages are built, this is done manually (hardcoded) in customize.py and the overrides are set up in cli.cli(). This way, no runtime checks need to be done. This is safe, because the package maintainer knows exactly which platform (Debian or not) the package is being built for. Unit tests are different, because they might be run anywhere. So, we attempt to make a guess about plaform using platformDebian(), and use that to set up the custom overrides so that platform-specific unit tests continue to work. """ config = Config() config.options = OptionsConfig() if platformDebian(): customizeOverrides(config, platform="debian") else: customizeOverrides(config, platform="standard") setupPathResolver(config) ########################### # findResources() function ########################### def findResources(resources, dataDirs): """ Returns a dictionary of locations for various resources. @param resources: List of required resources. @param dataDirs: List of data directories to search within for resources. @return: Dictionary mapping resource name to resource path. @raise Exception: If some resource cannot be found. """ mapping = { } for resource in resources: for resourceDir in dataDirs: path = os.path.join(resourceDir, resource) if os.path.exists(path): mapping[resource] = path break else: raise Exception("Unable to find resource [%s]." 
% resource) return mapping ############################## # commandAvailable() function ############################## def commandAvailable(command): """ Indicates whether a command is available on $PATH somewhere. This should work on both Windows and UNIX platforms. @param command: Command to search for @return: Boolean true/false depending on whether command is available. """ if os.environ.has_key("PATH"): for path in os.environ["PATH"].split(os.pathsep): if os.path.exists(os.path.join(path, command)): return True return False ####################### # buildPath() function ####################### def buildPath(components): """ Builds a complete path from a list of components. For instance, constructs C{"/a/b/c"} from C{["/a", "b", "c",]}. @param components: List of components. @return: String path constructed from components. @raise ValueError: If a path cannot be encoded properly. """ path = components[0] for component in components[1:]: path = os.path.join(path, component) return encodePath(path) ####################### # removedir() function ####################### def removedir(tree): """ Recursively removes an entire directory. This is basically taken from an example on python.com. @param tree: Directory tree to remove. @raise ValueError: If a path cannot be encoded properly. """ tree = encodePath(tree) for root, dirs, files in os.walk(tree, topdown=False): for name in files: path = os.path.join(root, name) if os.path.islink(path): os.remove(path) elif os.path.isfile(path): os.remove(path) for name in dirs: path = os.path.join(root, name) if os.path.islink(path): os.remove(path) elif os.path.isdir(path): os.rmdir(path) os.rmdir(tree) ######################## # extractTar() function ######################## def extractTar(tmpdir, filepath): """ Extracts the indicated tar file to the indicated tmpdir. @param tmpdir: Temp directory to extract to. @param filepath: Path to tarfile to extract. @raise ValueError: If a path cannot be encoded properly.
""" # pylint: disable=E1101 tmpdir = encodePath(tmpdir) filepath = encodePath(filepath) tar = tarfile.open(filepath) try: tar.format = tarfile.GNU_FORMAT except AttributeError: tar.posix = False for tarinfo in tar: tar.extract(tarinfo, tmpdir) ########################### # changeFileAge() function ########################### def changeFileAge(filename, subtract=None): """ Changes a file age using the C{os.utime} function. @note: Some platforms don't seem to be able to set an age precisely. As a result, whereas we might have intended to set an age of 86400 seconds, we actually get an age of 86399.375 seconds. When util.calculateFileAge() looks at that the file, it calculates an age of 0.999992766204 days, which then gets truncated down to zero whole days. The tests get very confused. To work around this, I always subtract off one additional second as a fudge factor. That way, the file age will be I{at least} as old as requested later on. @param filename: File to operate on. @param subtract: Number of seconds to subtract from the current time. @raise ValueError: If a path cannot be encoded properly. """ filename = encodePath(filename) newTime = time.time() - 1 if subtract is not None: newTime -= subtract os.utime(filename, (newTime, newTime)) ########################### # getMaskAsMode() function ########################### def getMaskAsMode(): """ Returns the user's current umask inverted to a mode. A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775. @return: Umask converted to a mode, as an integer. """ umask = os.umask(0777) os.umask(umask) return int(~umask & 0777) # invert, then use only lower bytes ###################### # getLogin() function ###################### def getLogin(): """ Returns the name of the currently-logged in user. This might fail under some circumstances - but if it does, our tests would fail anyway. 
""" return getpass.getuser() ############################ # randomFilename() function ############################ def randomFilename(length, prefix=None, suffix=None): """ Generates a random filename with the given length. @param length: Length of filename. @return Random filename. """ characters = [None] * length for i in xrange(length): characters[i] = random.choice(string.ascii_uppercase) if prefix is None: prefix = "" if suffix is None: suffix = "" return "%s%s%s" % (prefix, "".join(characters), suffix) #################################### # failUnlessAssignRaises() function #################################### def failUnlessAssignRaises(testCase, exception, obj, prop, value): """ Equivalent of C{failUnlessRaises}, but used for property assignments instead. It's nice to be able to use C{failUnlessRaises} to check that a method call raises the exception that you expect. Unfortunately, this method can't be used to check Python propery assignments, even though these property assignments are actually implemented underneath as methods. This function (which can be easily called by unit test classes) provides an easy way to wrap the assignment checks. It's not pretty, or as intuitive as the original check it's modeled on, but it does work. Let's assume you make this method call:: testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath) If you do this, a test case failure will be raised unless the assignment:: collectDir.absolutePath = absolutePath fails with a C{ValueError} exception. The failure message differentiates between the case where no exception was raised and the case where the wrong exception was raised. @note: Internally, the C{missed} and C{instead} variables are used rather than directly calling C{testCase.fail} upon noticing a problem because the act of "failure" itself generates an exception that would be caught by the general C{except} clause. @param testCase: PyUnit test case object (i.e. self). 
@param exception: Exception that is expected to be raised. @param obj: Object whose property is to be assigned to. @param prop: Name of the property, as a string. @param value: Value that is to be assigned to the property. @see: C{unittest.TestCase.failUnlessRaises} """ missed = False instead = None try: exec "obj.%s = value" % prop # pylint: disable=W0122 missed = True except exception: pass except Exception, e: instead = e if missed: testCase.fail("Expected assignment to raise %s, but got no exception." % (exception.__name__)) if instead is not None: testCase.fail("Expected assignment to raise %s, but got %s instead." % (exception.__name__, instead.__class__.__name__)) ########################### # captureOutput() function ########################### def captureOutput(c): """ Captures the output (stdout, stderr) of a function or a method. Some of our functions don't do anything other than just print output. We need a way to test these functions (at least nominally) but we don't want any of the output spoiling the test suite output. This function just creates a dummy file descriptor that can be used as a target by the callable function, rather than C{stdout} or C{stderr}. @note: This method assumes that C{callable} doesn't take any arguments besides keyword argument C{fd} to specify the file descriptor. @param c: Callable function or method. @return: Output of function, as one big string. """ fd = StringIO() c(fd=fd) result = fd.getvalue() fd.close() return result ######################### # _isPlatform() function ######################### def _isPlatform(name): """ Returns boolean indicating whether we're running on the indicated platform. 
@param name: Platform name to check, currently one of "windows", "macosx", "debian" or "cygwin" """ if name == "windows": return platform.platform(True, True).startswith("Windows") elif name == "macosx": return sys.platform == "darwin" elif name == "debian": return platform.platform(False, False).find("debian") > 0 elif name == "cygwin": return platform.platform(True, True).startswith("CYGWIN") else: raise ValueError("Unknown platform [%s]." % name) ############################ # platformDebian() function ############################ def platformDebian(): """ Returns boolean indicating whether this is the Debian platform. """ return _isPlatform("debian") ############################ # platformMacOsX() function ############################ def platformMacOsX(): """ Returns boolean indicating whether this is the Mac OS X platform. """ return _isPlatform("macosx") ############################# # platformWindows() function ############################# def platformWindows(): """ Returns boolean indicating whether this is the Windows platform. """ return _isPlatform("windows") ############################ # platformCygwin() function ############################ def platformCygwin(): """ Returns boolean indicating whether this is the Cygwin platform. """ return _isPlatform("cygwin") ################################### # platformSupportsLinks() function ################################### def platformSupportsLinks(): """ Returns boolean indicating whether the platform supports soft-links. Some platforms, like Windows, do not support links, and tests need to take this into account. """ return not platformWindows() ######################################### # platformSupportsPermissions() function ######################################### def platformSupportsPermissions(): """ Returns boolean indicating whether the platform supports UNIX-style file permissions. Some platforms, like Windows, do not support permissions, and tests need to take this into account. 
""" return not platformWindows() ######################################## # platformRequiresBinaryRead() function ######################################## def platformRequiresBinaryRead(): """ Returns boolean indicating whether the platform requires binary reads. Some platforms, like Windows, require a special flag to read binary data from files. """ return platformWindows() ############################# # platformHasEcho() function ############################# def platformHasEcho(): """ Returns boolean indicating whether the platform has a sensible echo command. On some platforms, like Windows, echo doesn't really work for tests. """ return not platformWindows() ########################### # runningAsRoot() function ########################### def runningAsRoot(): """ Returns boolean indicating whether the effective user id is root. This is always true on platforms that have no concept of root, like Windows. """ if platformWindows(): return True else: return os.geteuid() == 0 ############################## # availableLocales() function ############################## def availableLocales(): """ Returns a list of available locales on the system @return: List of string locale names """ locales = [] output = executeCommand(["locale"], [ "-a", ], returnOutput=True, ignoreStderr=True)[1] for line in output: locales.append(line.rstrip()) return locales #################################### # hexFloatLiteralAllowed() function #################################### def hexFloatLiteralAllowed(): """ Indicates whether hex float literals are allowed by the interpreter. As far back as 2004, some Python documentation indicated that octal and hex notation applied only to integer literals. However, prior to Python 2.5, it was legal to construct a float with an argument like 0xAC on some platforms. This check provides a an indication of whether the current interpreter supports that behavior. 
This check exists so that unit tests can continue to test the same thing as always for pre-2.5 interpreters (i.e. making sure backwards compatibility doesn't break) while still continuing to work for later interpreters. The returned value is True if hex float literals are allowed, False otherwise. """ if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5] and not platformWindows(): return True return False CedarBackup2-2.22.0/CedarBackup2/tools/0002775000175000017500000000000012143054371021135 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/CedarBackup2/tools/span.py0000775000175000017500000006036611645144635022475 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: span.py 999 2010-07-07 19:58:25Z pronovic $ # Purpose : Spans staged data among multiple discs # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Spans staged data among multiple discs This is the Cedar Backup span tool. It is intended for use by people who stage more data than can fit on a single disc. It allows a user to split staged data among more than one disc. It can't be an extension because it requires user input when switching media. Most configuration is taken from the Cedar Backup configuration file, specifically the store section. A few pieces of configuration are taken directly from the user. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules and constants ######################################################################## # System modules import sys import os import logging import tempfile # Cedar Backup modules from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT from CedarBackup2.util import displayBytes, convertSize, mount, unmount from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES from CedarBackup2.config import Config from CedarBackup2.filesystem import BackupFileList, compareDigestMaps, normalizeDir from CedarBackup2.cli import Options, setupLogging, setupPathResolver from CedarBackup2.cli import DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE from CedarBackup2.actions.constants import STORE_INDICATOR from CedarBackup2.actions.util import createWriter from CedarBackup2.actions.store import writeIndicatorFile from CedarBackup2.actions.util import findDailyDirs ######################################################################## # Module-wide constants and 
variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.tools.span") ####################################################################### # SpanOptions class ####################################################################### class SpanOptions(Options): """ Tool-specific command-line options. Most of the cback command-line options are exactly what we need here -- logfile path, permissions, verbosity, etc. However, we need to make a few tweaks since we don't accept any actions. Also, a few extra command line options that we accept are really ignored underneath. I just don't care about that for a tool like this. """ def validate(self): """ Validates command-line options represented by the object. There are no validations here, because we don't use any actions. @raise ValueError: If one of the validations fails. """ pass ####################################################################### # Public functions ####################################################################### ################# # cli() function ################# def cli(): """ Implements the command-line interface for the C{cback-span} script. Essentially, this is the "main routine" for the cback-span script. It does all of the argument processing for the script, and then also implements the tool functionality. This function looks pretty similar to C{CedarBackup2.cli.cli()}. It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication. 
A different error code is returned for each type of failure: - C{1}: The Python interpreter version is < 2.5 - C{2}: Error processing command-line arguments - C{3}: Error configuring logging - C{4}: Error parsing indicated configuration file - C{5}: Backup was interrupted with a CTRL-C or similar - C{6}: Error executing other parts of the script @note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively. @return: Error code as described above. """ try: if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5]: sys.stderr.write("Python version 2.5 or greater required.\n") return 1 except: # sys.version_info isn't available before 2.0 sys.stderr.write("Python version 2.5 or greater required.\n") return 1 try: options = SpanOptions(argumentList=sys.argv[1:]) except Exception, e: _usage() sys.stderr.write(" *** Error: %s\n" % e) return 2 if options.help: _usage() return 0 if options.version: _version() return 0 try: logfile = setupLogging(options) except Exception, e: sys.stderr.write("Error setting up logging: %s\n" % e) return 3 logger.info("Cedar Backup 'span' utility run started.") logger.info("Options were [%s]" % options) logger.info("Logfile is [%s]" % logfile) if options.config is None: logger.debug("Using default configuration file.") configPath = DEFAULT_CONFIG else: logger.debug("Using user-supplied configuration file.") configPath = options.config try: logger.info("Configuration path is [%s]" % configPath) config = Config(xmlPath=configPath) setupPathResolver(config) except Exception, e: logger.error("Error reading or handling configuration: %s" % e) logger.info("Cedar Backup 'span' utility run completed with status 4.") return 4 if options.stacktrace: _executeAction(options, config) else: try: _executeAction(options, config) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup 'span' utility run 
completed with status 5.") return 5 except Exception, e: logger.error("Error executing backup: %s" % e) logger.info("Cedar Backup 'span' utility run completed with status 6.") return 6 logger.info("Cedar Backup 'span' utility run completed with status 0.") return 0 ####################################################################### # Utility functions ####################################################################### #################### # _usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback-span [switches]\n") fd.write("\n") fd.write(" Cedar Backup 'span' tool.\n") fd.write("\n") fd.write(" This Cedar Backup utility spans staged data between multiple discs.\n") fd.write(" It is a utility, not an extension, and requires user interaction.\n") fd.write("\n") fd.write(" The following switches are accepted, mostly to set up underlying\n") fd.write(" Cedar Backup functionality:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. 
tar) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Cedar Backup 'span' tool.\n") fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. See the\n") fd.write(" GNU General Public License version 2 for copying conditions.\n") fd.write("\n") fd.write(" Use the --help option for usage information.\n") fd.write("\n") ############################ # _executeAction() function ############################ def _executeAction(options, config): """ Implements the guts of the cback-span tool. @param options: Program command-line options. @type options: SpanOptions object. @param config: Program configuration. @type config: Config object. @raise Exception: Under many generic error conditions """ print "" print "================================================" print " Cedar Backup 'span' tool" print "================================================" print "" print "This is the Cedar Backup span tool. It is used to split up staging" print "data when that staging data does not fit onto a single disc." print "" print "This utility operates using Cedar Backup configuration. Configuration" print "specifies which staging directory to look at and which writer device" print "and media type to use." 
print "" if not _getYesNoAnswer("Continue?", default="Y"): return print "===" print "" print "Cedar Backup store configuration looks like this:" print "" print " Source Directory...: %s" % config.store.sourceDir print " Media Type.........: %s" % config.store.mediaType print " Device Type........: %s" % config.store.deviceType print " Device Path........: %s" % config.store.devicePath print " Device SCSI ID.....: %s" % config.store.deviceScsiId print " Drive Speed........: %s" % config.store.driveSpeed print " Check Data Flag....: %s" % config.store.checkData print " No Eject Flag......: %s" % config.store.noEject print "" if not _getYesNoAnswer("Is this OK?", default="Y"): return print "===" (writer, mediaCapacity) = _getWriter(config) print "" print "Please wait, indexing the source directory (this may take a while)..." (dailyDirs, fileList) = _findDailyDirs(config.store.sourceDir) print "===" print "" print "The following daily staging directories have not yet been written to disc:" print "" for dailyDir in dailyDirs: print " %s" % dailyDir totalSize = fileList.totalSize() print "" print "The total size of the data in these directories is %s." % displayBytes(totalSize) print "" if not _getYesNoAnswer("Continue?", default="Y"): return print "===" print "" print "Based on configuration, the capacity of your media is %s." % displayBytes(mediaCapacity) print "" print "Since estimates are not perfect and there is some uncertainly in" print "media capacity calculations, it is good to have a \"cushion\"," print "a percentage of capacity to set aside. The cushion reduces the" print "capacity of your media, so a 1.5% cushion leaves 98.5% remaining." print "" cushion = _getFloat("What cushion percentage?", default=4.5) print "===" realCapacity = ((100.0 - cushion)/100.0) * mediaCapacity minimumDiscs = (totalSize/realCapacity) + 1 print "" print "The real capacity, taking into account the %.2f%% cushion, is %s." 
% (cushion, displayBytes(realCapacity)) print "It will take at least %d disc(s) to store your %s of data." % (minimumDiscs, displayBytes(totalSize)) print "" if not _getYesNoAnswer("Continue?", default="Y"): return print "===" happy = False while not happy: print "" print "Which algorithm do you want to use to span your data across" print "multiple discs?" print "" print "The following algorithms are available:" print "" print " first....: The \"first-fit\" algorithm" print " best.....: The \"best-fit\" algorithm" print " worst....: The \"worst-fit\" algorithm" print " alternate: The \"alternate-fit\" algorithm" print "" print "If you don't like the results you will have a chance to try a" print "different one later." print "" algorithm = _getChoiceAnswer("Which algorithm?", "worst", [ "first", "best", "worst", "alternate", ]) print "===" print "" print "Please wait, generating file lists (this may take a while)..." spanSet = fileList.generateSpan(capacity=realCapacity, algorithm="%s_fit" % algorithm) print "===" print "" print "Using the \"%s-fit\" algorithm, Cedar Backup can split your data" % algorithm print "into %d discs." % len(spanSet) print "" counter = 0 for item in spanSet: counter += 1 print "Disc %d: %d files, %s, %.2f%% utilization" % (counter, len(item.fileList), displayBytes(item.size), item.utilization) print "" if _getYesNoAnswer("Accept this solution?", default="Y"): happy = True print "===" counter = 0 for spanItem in spanSet: counter += 1 if counter == 1: print "" _getReturn("Please place the first disc in your backup device.\nPress return when ready.") print "===" else: print "" _getReturn("Please replace the disc in your backup device.\nPress return when ready.") print "===" _writeDisc(config, writer, spanItem) _writeStoreIndicator(config, dailyDirs) print "" print "Completed writing all discs." 
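The cushion arithmetic used by the span tool can be sketched as a small standalone helper. This is an illustrative sketch only: `minimum_discs` is a hypothetical name, and unlike the tool's `(totalSize/realCapacity) + 1` estimate it uses a true ceiling.

```python
import math

def minimum_discs(total_size, media_capacity, cushion_pct):
    """Estimate the number of discs needed once a capacity cushion is set aside."""
    # A 4.5% cushion leaves 95.5% of the raw media capacity usable.
    real_capacity = ((100.0 - cushion_pct) / 100.0) * media_capacity
    # Round up: any remainder, however small, still requires one more disc.
    return int(math.ceil(total_size / real_capacity))
```

For example, 10 GB of staged data on 4.7 GB media with the default 4.5% cushion has a real capacity of about 4.49 GB per disc, so three discs are required.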
############################ # _findDailyDirs() function ############################ def _findDailyDirs(stagingDir): """ Returns a list of all daily staging directories that have not yet been stored. The store indicator file C{cback.store} will be written to a daily staging directory once that directory is written to disc. So, this function looks at each daily staging directory within the configured staging directory, and returns a list of those which do not contain the indicator file. Returned is a tuple containing two items: a list of daily staging directories, and a BackupFileList containing all files among those staging directories. @param stagingDir: Configured staging directory @return: Tuple (staging dirs, backup file list) """ results = findDailyDirs(stagingDir, STORE_INDICATOR) fileList = BackupFileList() for item in results: fileList.addDirContents(item) return (results, fileList) ################################## # _writeStoreIndicator() function ################################## def _writeStoreIndicator(config, dailyDirs): """ Writes a store indicator file into daily directories. @param config: Config object. @param dailyDirs: List of daily directories """ for dailyDir in dailyDirs: writeIndicatorFile(dailyDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup) ######################## # _getWriter() function ######################## def _getWriter(config): """ Gets a writer and media capacity from store configuration. Returned is a writer and a media capacity in bytes. @param config: Cedar Backup configuration @return: Tuple of (writer, mediaCapacity) """ writer = createWriter(config) mediaCapacity = convertSize(writer.media.capacity, UNIT_SECTORS, UNIT_BYTES) return (writer, mediaCapacity) ######################## # _writeDisc() function ######################## def _writeDisc(config, writer, spanItem): """ Writes a span item to disc. 
@param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ print "" _discInitializeImage(config, writer, spanItem) _discWriteImage(config, writer) _discConsistencyCheck(config, writer, spanItem) print "Write process is complete." print "===" def _discInitializeImage(config, writer, spanItem): """ Initialize an ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ complete = False while not complete: try: print "Initializing image..." writer.initializeImage(newDisc=True, tmpdir=config.options.workingDir) for path in spanItem.fileList: graftPoint = os.path.dirname(path.replace(config.store.sourceDir, "", 1)) writer.addImageEntry(path, graftPoint) complete = True except KeyboardInterrupt, e: raise e except Exception, e: logger.error("Failed to initialize image: %s" % e) if not _getYesNoAnswer("Retry initialization step?", default="Y"): raise e print "Ok, attempting retry." print "===" print "Completed initializing image." def _discWriteImage(config, writer): """ Writes an ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use """ complete = False while not complete: try: print "Writing image to disc..." writer.writeImage() complete = True except KeyboardInterrupt, e: raise e except Exception, e: logger.error("Failed to write image: %s" % e) if not _getYesNoAnswer("Retry this step?", default="Y"): raise e print "Ok, attempting retry." _getReturn("Please replace media if needed.\nPress return when ready.") print "===" print "Completed writing image." def _discConsistencyCheck(config, writer, spanItem): """ Run a consistency check on an ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ if config.store.checkData: complete = False while not complete: try: print "Running consistency check..." 
_consistencyCheck(config, spanItem.fileList) complete = True except KeyboardInterrupt, e: raise e except Exception, e: logger.error("Consistency check failed: %s" % e) if not _getYesNoAnswer("Retry the consistency check?", default="Y"): raise e if _getYesNoAnswer("Rewrite the disc first?", default="N"): print "Ok, attempting retry." _getReturn("Please replace the disc in your backup device.\nPress return when ready.") print "===" _discWriteImage(config, writer) else: print "Ok, attempting retry." print "===" print "Completed consistency check." ############################### # _consistencyCheck() function ############################### def _consistencyCheck(config, fileList): """ Runs a consistency check against media in the backup device. The function mounts the device at a temporary mount point in the working directory, and then compares the passed-in file list's digest map with the one generated from the disc. The two lists should be identical. If no exceptions are thrown, there were no problems with the consistency check. @warning: The implementation of this function is very UNIX-specific. @param config: Config object. @param fileList: BackupFileList whose contents to check against @raise ValueError: If the check fails @raise IOError: If there is a problem working with the media. """ logger.debug("Running consistency check.") mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) try: mount(config.store.devicePath, mountPoint, "iso9660") discList = BackupFileList() discList.addDirContents(mountPoint) sourceList = BackupFileList() sourceList.extend(fileList) discListDigest = discList.generateDigestMap(stripPrefix=normalizeDir(mountPoint)) sourceListDigest = sourceList.generateDigestMap(stripPrefix=normalizeDir(config.store.sourceDir)) compareDigestMaps(sourceListDigest, discListDigest, verbose=True) logger.info("Consistency check completed. 
No problems found.") finally: unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done ######################################################################### # User interface utilities ######################################################################## def _getYesNoAnswer(prompt, default): """ Get a yes/no answer from the user. The default will be placed at the end of the prompt. A "Y" or "y" is considered yes, anything else no. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is blank @return: Boolean true/false corresponding to Y/N """ if default == "Y": prompt = "%s [Y/n]: " % prompt else: prompt = "%s [y/N]: " % prompt answer = raw_input(prompt) if answer in [ None, "", ]: answer = default if answer[0] in [ "Y", "y", ]: return True else: return False def _getChoiceAnswer(prompt, default, validChoices): """ Get a particular choice from the user. The default will be placed at the end of the prompt. The function loops until getting a valid choice. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is None or blank. @param validChoices: List of valid choices (strings) @return: Valid choice from user. """ prompt = "%s [%s]: " % (prompt, default) answer = raw_input(prompt) if answer in [ None, "", ]: answer = default while answer not in validChoices: print "Choice must be one of %s" % validChoices answer = raw_input(prompt) return answer def _getFloat(prompt, default): """ Get a floating point number from the user. The default will be placed at the end of the prompt. The function loops until getting a valid floating point number. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is None or blank. 
@return: Floating point number from user """ prompt = "%s [%.2f]: " % (prompt, default) while True: answer = raw_input(prompt) if answer in [ None, "" ]: return default else: try: return float(answer) except ValueError: print "Enter a floating point number." def _getReturn(prompt): """ Get a return key from the user. @param prompt: Prompt to show. """ raw_input(prompt) ######################################################################### # Main routine ######################################################################## if __name__ == "__main__": result = cli() sys.exit(result) CedarBackup2-2.22.0/CedarBackup2/tools/__init__.py0000664000175000017500000000342311415155732023252 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Official Cedar Backup Tools # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Official Cedar Backup Tools This package provides official Cedar Backup tools. Tools are things that feel a little like extensions, but don't fit the normal mold of extensions. For instance, they might not be intended to run from cron, or might need to interact dynamically with the user (i.e. accept user input). Tools are usually scripts that are run directly from the command line, just like the main C{cback} script. 
Like the C{cback} script, the majority of a tool is implemented in a .py module, and then the script just invokes the module's C{cli()} function. The actual scripts for tools are distributed in the util/ directory. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup2.tools import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'span', ] CedarBackup2-2.22.0/CedarBackup2/customize.py0000664000175000017500000000671711415155732022406 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: customize.py 998 2010-07-07 19:56:08Z pronovic $ # Purpose : Implements customized behavior. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements customized behavior. Some behaviors need to vary when packaged for certain platforms. For instance, while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible utilities called wodim and genisoimage. I want there to be one single place where Cedar Backup is patched for Debian, rather than having to maintain a variety of patches in different places. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.customize") PLATFORM = "standard" #PLATFORM = "debian" DEBIAN_CDRECORD = "/usr/bin/wodim" DEBIAN_MKISOFS = "/usr/bin/genisoimage" ####################################################################### # Public functions ####################################################################### ################################ # customizeOverrides() function ################################ def customizeOverrides(config, platform=PLATFORM): """ Modify command overrides based on the configured platform. On some platforms, we want to add command overrides to configuration. Each override will only be added if the configuration does not already contain an override with the same name. That way, the user still has a way to choose their own version of the command if they want. 
@param config: Configuration to modify @param platform: Platform that is in use """ if platform == "debian": logger.info("Overriding cdrecord for Debian platform: %s" % DEBIAN_CDRECORD) config.options.addOverride("cdrecord", DEBIAN_CDRECORD) logger.info("Overriding mkisofs for Debian platform: %s" % DEBIAN_MKISOFS) config.options.addOverride("mkisofs", DEBIAN_MKISOFS) CedarBackup2-2.22.0/CedarBackup2/config.py0000664000175000017500000067632212143053141021624 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: config.py 1041 2013-05-10 02:05:13Z pronovic $ # Purpose : Provides configuration-related objects. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides configuration-related objects. 
Summary ======= Cedar Backup stores all of its configuration in an XML document typically called C{cback.conf}. The standard location for this document is in C{/etc}, but users can specify a different location if they want to. The C{Config} class is a Python object representation of a Cedar Backup XML configuration file. The representation is two-way: XML data can be used to create a C{Config} object, and then changes to the object can be propagated back to disk. A C{Config} object can even be used to create a configuration file from scratch programmatically. The C{Config} class is intended to be the only Python-language interface to Cedar Backup configuration on disk. Cedar Backup will use the class as its internal representation of configuration, and applications external to Cedar Backup itself (such as a hypothetical third-party configuration tool written in Python or a third-party extension module) should also use the class when they need to read and write configuration files. Backwards Compatibility ======================= The configuration file format has changed between Cedar Backup 1.x and Cedar Backup 2.x. Any Cedar Backup 1.x configuration file is also a valid Cedar Backup 2.x configuration file. However, it doesn't work to go the other direction, as the 2.x configuration files contain additional configuration that is not accepted by older versions of the software. XML Configuration Structure =========================== A C{Config} object can either be created "empty", or can be created based on XML input (either in the form of a string or read in from a file on disk). Generally speaking, the XML input I{must} result in a C{Config} object which passes the validations laid out below in the I{Validation} section.
An XML configuration file is composed of eight sections: - I{reference}: specifies reference information about the file (author, revision, etc) - I{extensions}: specifies mappings to Cedar Backup extensions (external code) - I{options}: specifies global configuration options - I{peers}: specifies the set of peers in a master's backup pool - I{collect}: specifies configuration related to the collect action - I{stage}: specifies configuration related to the stage action - I{store}: specifies configuration related to the store action - I{purge}: specifies configuration related to the purge action Each section is represented by a class in this module, and then the overall C{Config} class is a composition of the various other classes. Any configuration section that is missing in the XML document (or has not been filled into an "empty" document) will just be set to C{None} in the object representation. The same goes for individual fields within each configuration section. Keep in mind that the document might not be completely valid if some sections or fields aren't filled in - but that won't matter until validation takes place (see the I{Validation} section below). Unicode vs. String Data ======================= By default, all string data that comes out of XML documents in Python is unicode data (i.e. C{u"whatever"}). This is fine for many things, but when it comes to filesystem paths, it can cause us some problems. We really want strings to be encoded in the filesystem encoding rather than being unicode. So, most elements in configuration which represent filesystem paths are converted to plain strings using L{util.encodePath}. The main exception is the various C{absoluteExcludePath} and C{relativeExcludePath} lists. These are I{not} converted, because they are generally only used for filtering, not for filesystem operations.
Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's C{property} functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a C{ValueError} exception when making assignments to configuration class fields. The second level of validation is post-completion validation. Certain validations don't make sense until a document is fully "complete". We don't want these validations to apply all of the time, because it would make building up a document from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc. All of these post-completion validations are encapsulated in the L{Config.validate} method. This method can be called at any time by a client, and will always be called immediately after creating a C{Config} object from XML data and before exporting a C{Config} object to XML. This way, we get decent ease-of-use but we also don't accept or emit invalid configuration files. The L{Config.validate} implementation actually takes two passes to completely validate a configuration document. The first pass at validation is to ensure that the proper sections are filled into the document. There are default requirements, but the caller has the opportunity to override these defaults. The second pass at validation ensures that any filled-in section contains valid data. Any section which is not set to C{None} is validated according to the rules for that section (see below). I{Reference Validations} No validations. I{Extensions Validations} The list of actions may be either C{None} or an empty list C{[]} if desired. Each extended action must include a name, a module and a function. Then, an extended action must include either an index or dependency information. Which one is required depends on which order mode is configured. 
I{Options Validations} All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose. I{Peers Validations} Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section. I{Collect Validations} The target directory must be filled in. The collect mode, archive mode and ignore file are all optional. The list of absolute paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent C{CollectConfig} object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the C{CollectConfig} object to make the complete list for a given directory. I{Stage Validations} The target directory must be filled in. There must be at least one peer (remote or local) between the two lists of peers. A list with no entries can be either C{None} or an empty list C{[]} if desired. If a set of peers is provided, this configuration completely overrides configuration in the peers configuration section, and the same validations apply. I{Store Validations} The device type and drive speed are optional, and all other values are required (missing booleans will be set to defaults, which is OK). 
The image writer functionality in the C{writer} module is supposed to be able to handle a device speed of C{None}. Any caller which needs a "real" (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. I{Purge Validations} The list of purge directories may be either C{None} or an empty list C{[]} if desired. All purge directories must contain a path and a retain days value. @sort: ActionDependencies, ActionHook, PreActionHook, PostActionHook, ExtendedAction, CommandOverride, CollectFile, CollectDir, PurgeDir, LocalPeer, RemotePeer, ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig, CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config, DEFAULT_DEVICE_TYPE, DEFAULT_MEDIA_TYPE, VALID_DEVICE_TYPES, VALID_MEDIA_TYPES, VALID_COLLECT_MODES, VALID_ARCHIVE_MODES, VALID_ORDER_MODES @var DEFAULT_DEVICE_TYPE: The default device type. @var DEFAULT_MEDIA_TYPE: The default media type. @var VALID_DEVICE_TYPES: List of valid device types. @var VALID_MEDIA_TYPES: List of valid media types. @var VALID_COLLECT_MODES: List of valid collect modes. @var VALID_COMPRESS_MODES: List of valid compress modes. @var VALID_ARCHIVE_MODES: List of valid archive modes. @var VALID_ORDER_MODES: List of valid extension order modes. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging # Cedar Backup modules from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed from CedarBackup2.util import UnorderedList, AbsolutePathList, ObjectTypeList, parseCommaSeparatedString from CedarBackup2.util import RegexMatchList, RegexList, encodePath, checkUnique from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild from CedarBackup2.xmlutil import readStringList, readString, readInteger, readBoolean from CedarBackup2.xmlutil import addContainerNode, addStringNode, addIntegerNode, addBooleanNode from CedarBackup2.xmlutil import createInputDom, createOutputDom, serializeDom ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.config") DEFAULT_DEVICE_TYPE = "cdwriter" DEFAULT_MEDIA_TYPE = "cdrw-74" VALID_DEVICE_TYPES = [ "cdwriter", "dvdwriter", ] VALID_CD_MEDIA_TYPES = [ "cdr-74", "cdrw-74", "cdr-80", "cdrw-80", ] VALID_DVD_MEDIA_TYPES = [ "dvd+r", "dvd+rw", ] VALID_MEDIA_TYPES = VALID_CD_MEDIA_TYPES + VALID_DVD_MEDIA_TYPES VALID_COLLECT_MODES = [ "daily", "weekly", "incr", ] VALID_ARCHIVE_MODES = [ "tar", "targz", "tarbz2", ] VALID_COMPRESS_MODES = [ "none", "gzip", "bzip2", ] VALID_ORDER_MODES = [ "index", "dependency", ] VALID_BLANK_MODES = [ "daily", "weekly", ] VALID_BYTE_UNITS = [ UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, ] VALID_FAILURE_MODES = [ "none", "all", "daily", "weekly", ] REWRITABLE_MEDIA_TYPES = [ "cdrw-74", "cdrw-80", "dvd+rw", ] ACTION_NAME_REGEX = r"^[a-z0-9]*$" 
######################################################################## # ByteQuantity class definition ######################################################################## class ByteQuantity(object): """ Class representing a byte quantity. A byte quantity has both a quantity and a byte-related unit. Units are maintained using the constants from util.py. The quantity is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.) Even though the quantity is maintained as a string, the string must be a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative quantity of bytes in this context. @sort: __init__, __repr__, __str__, __cmp__, quantity, units """ def __init__(self, quantity=None, units=None): """ Constructor for the C{ByteQuantity} class. @param quantity: Quantity of bytes, as string ("1.25") @param units: Unit of bytes, one of VALID_BYTE_UNITS @raise ValueError: If one of the values is invalid. """ self._quantity = None self._units = None self.quantity = quantity self.units = units def __repr__(self): """ Official string representation for class instance. """ return "ByteQuantity(%s, %s)" % (self.quantity, self.units) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.quantity != other.quantity: if self.quantity < other.quantity: return -1 else: return 1 if self.units != other.units: if self.units < other.units: return -1 else: return 1 return 0 def _setQuantity(self, value): """ Property target used to set the quantity The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is not None: if len(value) < 1: raise ValueError("Quantity must be a non-empty string.") floatValue = float(value) if floatValue < 0.0: raise ValueError("Quantity cannot be negative.") self._quantity = value # keep around string def _getQuantity(self): """ Property target used to get the quantity. """ return self._quantity def _setUnits(self, value): """ Property target used to set the units value. If not C{None}, the units value must be one of the values in L{VALID_BYTE_UNITS}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_BYTE_UNITS: raise ValueError("Units value must be one of %s." % VALID_BYTE_UNITS) self._units = value def _getUnits(self): """ Property target used to get the units value. """ return self._units def _getBytes(self): """ Property target used to return the byte quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned. 
""" if self.quantity is not None and self.units is not None: return convertSize(self.quantity, self.units, UNIT_BYTES) return 0.0 quantity = property(_getQuantity, _setQuantity, None, doc="Byte quantity, as a string") units = property(_getUnits, _setUnits, None, doc="Units for byte quantity, for instance UNIT_BYTES") bytes = property(_getBytes, None, None, doc="Byte quantity, as a floating point number.") ######################################################################## # ActionDependencies class definition ######################################################################## class ActionDependencies(object): """ Class representing dependencies associated with an extended action. Execution ordering for extended actions is done in one of two ways: either by using index values (lower index gets run first) or by having the extended action specify dependencies in terms of other named actions. This class encapsulates the dependency information for an extended action. The following restrictions exist on data in this class: - Any action name must be a non-empty string matching C{ACTION_NAME_REGEX} @sort: __init__, __repr__, __str__, __cmp__, beforeList, afterList """ def __init__(self, beforeList=None, afterList=None): """ Constructor for the C{ActionDependencies} class. @param beforeList: List of named actions that this action must be run before @param afterList: List of named actions that this action must be run after @raise ValueError: If one of the values is invalid. """ self._beforeList = None self._afterList = None self.beforeList = beforeList self.afterList = afterList def __repr__(self): """ Official string representation for class instance. """ return "ActionDependencies(%s, %s)" % (self.beforeList, self.afterList) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. 
@return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.beforeList != other.beforeList: if self.beforeList < other.beforeList: return -1 else: return 1 if self.afterList != other.afterList: if self.afterList < other.afterList: return -1 else: return 1 return 0 def _setBeforeList(self, value): """ Property target used to set the "run before" list. Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. @raise ValueError: If the value does not match the regular expression. """ if value is None: self._beforeList = None else: try: saved = self._beforeList self._beforeList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._beforeList.extend(value) except Exception, e: self._beforeList = saved raise e def _getBeforeList(self): """ Property target used to get the "run before" list. """ return self._beforeList def _setAfterList(self, value): """ Property target used to set the "run after" list. Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. @raise ValueError: If the value does not match the regular expression. """ if value is None: self._afterList = None else: try: saved = self._afterList self._afterList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._afterList.extend(value) except Exception, e: self._afterList = saved raise e def _getAfterList(self): """ Property target used to get the "run after" list. 
""" return self._afterList beforeList = property(_getBeforeList, _setBeforeList, None, "List of named actions that this action must be run before.") afterList = property(_getAfterList, _setAfterList, None, "List of named actions that this action must be run after.") ######################################################################## # ActionHook class definition ######################################################################## class ActionHook(object): """ Class representing a hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. The following restrictions exist on data in this class: - The action name must be a non-empty string matching C{ACTION_NAME_REGEX} - The shell command must be a non-empty string. The internal C{before} and C{after} instance variables are always set to False in this parent class. @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{ActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ self._action = None self._command = None self._before = False self._after = False self.action = action self.command = command def __repr__(self): """ Official string representation for class instance. """ return "ActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.action != other.action: if self.action < other.action: return -1 else: return 1 if self.command != other.command: if self.command < other.command: return -1 else: return 1 if self.before != other.before: if self.before < other.before: return -1 else: return 1 if self.after != other.after: if self.after < other.after: return -1 else: return 1 return 0 def _setAction(self, value): """ Property target used to set the action name. The value must be a non-empty string if it is not C{None}. It must also consist only of lower-case letters and digits. @raise ValueError: If the value is an empty string. """ pattern = re.compile(ACTION_NAME_REGEX) if value is not None: if len(value) < 1: raise ValueError("The action name must be a non-empty string.") if not pattern.search(value): raise ValueError("The action name must consist of only lower-case letters and digits.") self._action = value def _getAction(self): """ Property target used to get the action name. """ return self._action def _setCommand(self, value): """ Property target used to set the command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The command must be a non-empty string.") self._command = value def _getCommand(self): """ Property target used to get the command. """ return self._command def _getBefore(self): """ Property target used to get the before flag. """ return self._before def _getAfter(self): """ Property target used to get the after flag. 
""" return self._after action = property(_getAction, _setAction, None, "Action this hook is associated with.") command = property(_getCommand, _setCommand, None, "Shell command to execute.") before = property(_getBefore, None, None, "Indicates whether command should be executed before action.") after = property(_getAfter, None, None, "Indicates whether command should be executed after action.") class PreActionHook(ActionHook): """ Class representing a pre-action hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a pre-action hook is executed before the named action. The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The shell command must be a non-empty string. The internal C{before} instance variable is always set to True in this class. @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{PreActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ ActionHook.__init__(self, action, command) self._before = True def __repr__(self): """ Official string representation for class instance. """ return "PreActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) class PostActionHook(ActionHook): """ Class representing a pre-action hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a post-action hook is executed after the named action. The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The shell command must be a non-empty string. 
The internal C{after} instance variable is always set to True in this class. @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{PostActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ ActionHook.__init__(self, action, command) self._after = True def __repr__(self): """ Official string representation for class instance. """ return "PostActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) ######################################################################## # BlankBehavior class definition ######################################################################## class BlankBehavior(object): """ Class representing optimized store-action media blanking behavior. The following restrictions exist on data in this class: - The blanking mode must be one of the values in L{VALID_BLANK_MODES} - The blanking factor must be a positive floating point number @sort: __init__, __repr__, __str__, __cmp__, blankMode, blankFactor """ def __init__(self, blankMode=None, blankFactor=None): """ Constructor for the C{BlankBehavior} class. @param blankMode: Blanking mode @param blankFactor: Blanking factor @raise ValueError: If one of the values is invalid. """ self._blankMode = None self._blankFactor = None self.blankMode = blankMode self.blankFactor = blankFactor def __repr__(self): """ Official string representation for class instance. """ return "BlankBehavior(%s, %s)" % (self.blankMode, self.blankFactor) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.blankMode != other.blankMode: if self.blankMode < other.blankMode: return -1 else: return 1 if self.blankFactor != other.blankFactor: if self.blankFactor < other.blankFactor: return -1 else: return 1 return 0 def _setBlankMode(self, value): """ Property target used to set the blanking mode. The value must be one of L{VALID_BLANK_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_BLANK_MODES: raise ValueError("Blanking mode must be one of %s." % VALID_BLANK_MODES) self._blankMode = value def _getBlankMode(self): """ Property target used to get the blanking mode. """ return self._blankMode def _setBlankFactor(self, value): """ Property target used to set the blanking factor. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is not None: if len(value) < 1: raise ValueError("Blanking factor must be a non-empty string.") floatValue = float(value) if floatValue < 0.0: raise ValueError("Blanking factor cannot be negative.") self._blankFactor = value # keep around string def _getBlankFactor(self): """ Property target used to get the blanking factor. """ return self._blankFactor blankMode = property(_getBlankMode, _setBlankMode, None, "Blanking mode") blankFactor = property(_getBlankFactor, _setBlankFactor, None, "Blanking factor") ######################################################################## # ExtendedAction class definition ######################################################################## class ExtendedAction(object): """ Class representing an extended action. 
Essentially, an extended action needs to allow the following to happen:: exec("from %s import %s" % (module, function)) exec("%s(action, configPath)" % function) The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The module must be a non-empty string and a valid Python identifier. - The function must be a non-empty string and a valid Python identifier. - If set, the index must be a positive integer. - If set, the dependencies attribute must be an C{ActionDependencies} object. @sort: __init__, __repr__, __str__, __cmp__, name, module, function, index, dependencies """ def __init__(self, name=None, module=None, function=None, index=None, dependencies=None): """ Constructor for the C{ExtendedAction} class. @param name: Name of the extended action @param module: Name of the module containing the extended action function @param function: Name of the extended action function @param index: Index of action, used for execution ordering @param dependencies: Dependencies for action, used for execution ordering @raise ValueError: If one of the values is invalid. """ self._name = None self._module = None self._function = None self._index = None self._dependencies = None self.name = name self.module = module self.function = function self.index = index self.dependencies = dependencies def __repr__(self): """ Official string representation for class instance. """ return "ExtendedAction(%s, %s, %s, %s, %s)" % (self.name, self.module, self.function, self.index, self.dependencies) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.name != other.name: if self.name < other.name: return -1 else: return 1 if self.module != other.module: if self.module < other.module: return -1 else: return 1 if self.function != other.function: if self.function < other.function: return -1 else: return 1 if self.index != other.index: if self.index < other.index: return -1 else: return 1 if self.dependencies != other.dependencies: if self.dependencies < other.dependencies: return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the action name. The value must be a non-empty string if it is not C{None}. It must also consist only of lower-case letters and digits. @raise ValueError: If the value is an empty string. """ pattern = re.compile(ACTION_NAME_REGEX) if value is not None: if len(value) < 1: raise ValueError("The action name must be a non-empty string.") if not pattern.search(value): raise ValueError("The action name must consist of only lower-case letters and digits.") self._name = value def _getName(self): """ Property target used to get the action name. """ return self._name def _setModule(self, value): """ Property target used to set the module name. The value must be a non-empty string if it is not C{None}. It must also be a valid Python identifier. @raise ValueError: If the value is an empty string. """ pattern = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)(\.[A-Za-z_][A-Za-z0-9_]*)*$") if value is not None: if len(value) < 1: raise ValueError("The module name must be a non-empty string.") if not pattern.search(value): raise ValueError("The module name must be a valid Python identifier.") self._module = value def _getModule(self): """ Property target used to get the module name. """ return self._module def _setFunction(self, value): """ Property target used to set the function name. The value must be a non-empty string if it is not C{None}. It must also be a valid Python identifier. @raise ValueError: If the value is an empty string. 
""" pattern = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$") if value is not None: if len(value) < 1: raise ValueError("The function name must be a non-empty string.") if not pattern.search(value): raise ValueError("The function name must be a valid Python identifier.") self._function = value def _getFunction(self): """ Property target used to get the function name. """ return self._function def _setIndex(self, value): """ Property target used to set the action index. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._index = None else: try: value = int(value) except TypeError: raise ValueError("Action index value must be an integer >= 0.") if value < 0: raise ValueError("Action index value must be an integer >= 0.") self._index = value def _getIndex(self): """ Property target used to get the action index. """ return self._index def _setDependencies(self, value): """ Property target used to set the action dependencies information. If not C{None}, the value must be a C{ActionDependecies} object. @raise ValueError: If the value is not a C{ActionDependencies} object. """ if value is None: self._dependencies = None else: if not isinstance(value, ActionDependencies): raise ValueError("Value must be a C{ActionDependencies} object.") self._dependencies = value def _getDependencies(self): """ Property target used to get action dependencies information. 
""" return self._dependencies name = property(_getName, _setName, None, "Name of the extended action.") module = property(_getModule, _setModule, None, "Name of the module containing the extended action function.") function = property(_getFunction, _setFunction, None, "Name of the extended action function.") index = property(_getIndex, _setIndex, None, "Index of action, used for execution ordering.") dependencies = property(_getDependencies, _setDependencies, None, "Dependencies for action, used for execution ordering.") ######################################################################## # CommandOverride class definition ######################################################################## class CommandOverride(object): """ Class representing a piece of Cedar Backup command override configuration. The following restrictions exist on data in this class: - The absolute path must be absolute @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, command, absolutePath """ def __init__(self, command=None, absolutePath=None): """ Constructor for the C{CommandOverride} class. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overrridden command. @raise ValueError: If one of the values is invalid. """ self._command = None self._absolutePath = None self.command = command self.absolutePath = absolutePath def __repr__(self): """ Official string representation for class instance. """ return "CommandOverride(%s, %s)" % (self.command, self.absolutePath) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.command != other.command: if self.command < other.command: return -1 else: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 return 0 def _setCommand(self, value): """ Property target used to set the command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The command must be a non-empty string.") self._command = value def _getCommand(self): """ Property target used to get the command. """ return self._command def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath command = property(_getCommand, _setCommand, None, doc="Name of command to be overridden.") absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the overrridden command.") ######################################################################## # CollectFile class definition ######################################################################## class CollectFile(object): """ Class representing a Cedar Backup collect file. The following restrictions exist on data in this class: - Absolute paths must be absolute - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. 
@sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, archiveMode """ def __init__(self, absolutePath=None, collectMode=None, archiveMode=None): """ Constructor for the C{CollectFile} class. @param absolutePath: Absolute path of the file to collect. @param collectMode: Overridden collect mode for this file. @param archiveMode: Overridden archive mode for this file. @raise ValueError: If one of the values is invalid. """ self._absolutePath = None self._collectMode = None self._archiveMode = None self.absolutePath = absolutePath self.collectMode = collectMode self.archiveMode = archiveMode def __repr__(self): """ Official string representation for class instance. """ return "CollectFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.archiveMode != other.archiveMode: if self.archiveMode < other.archiveMode: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. 
""" return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the file to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this file.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this file.") ######################################################################## # CollectDir class definition ######################################################################## class CollectDir(object): """ Class representing a Cedar Backup collect directory. The following restrictions exist on data in this class: - Absolute paths must be absolute - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. - The ignore file must be a non-empty string. 
For the C{absoluteExcludePaths} list, validation is accomplished through the L{util.AbsolutePathList} list implementation that overrides common list methods and transparently does the absolute path validation for us. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, recursionLevel, absoluteExcludePaths, relativeExcludePaths, excludePatterns """ def __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None): """ Constructor for the C{CollectDir} class. @param absolutePath: Absolute path of the directory to collect. @param collectMode: Overridden collect mode for this directory. @param archiveMode: Overridden archive mode for this directory. @param ignoreFile: Overridden ignore file name for this directory. @param linkDepth: Maximum depth at which soft links should be followed. @param dereference: Whether to dereference links that are followed. @param recursionLevel: Recursion level to use for recursive directory collection. @param absoluteExcludePaths: List of absolute paths to exclude. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude. @raise ValueError: If one of the values is invalid. 
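The C{linkDepth} and C{recursionLevel} values are coerced to integers by their setters. A self-contained sketch of that coercion (hypothetical helper name; unlike the real setters, which catch only C{TypeError} and let C{int()}'s own C{ValueError} propagate, this sketch folds both into one message):

```python
def coerceNonNegativeInt(value, what="Link depth"):
    # Hypothetical helper mirroring the _setLinkDepth validation logic.
    if value is None:
        return None
    try:
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError("%s value must be an integer >= 0." % what)
    if value < 0:
        raise ValueError("%s value must be an integer >= 0." % what)
    return value
```

Accepting C{None} keeps "not configured" distinct from a configured value of zero.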
""" self._absolutePath = None self._collectMode = None self._archiveMode = None self._ignoreFile = None self._linkDepth = None self._dereference = None self._recursionLevel = None self._absoluteExcludePaths = None self._relativeExcludePaths = None self._excludePatterns = None self.absolutePath = absolutePath self.collectMode = collectMode self.archiveMode = archiveMode self.ignoreFile = ignoreFile self.linkDepth = linkDepth self.dereference = dereference self.recursionLevel = recursionLevel self.absoluteExcludePaths = absoluteExcludePaths self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. """ return "CollectDir(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode, self.ignoreFile, self.absoluteExcludePaths, self.relativeExcludePaths, self.excludePatterns, self.linkDepth, self.dereference, self.recursionLevel) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.archiveMode != other.archiveMode: if self.archiveMode < other.archiveMode: return -1 else: return 1 if self.ignoreFile != other.ignoreFile: if self.ignoreFile < other.ignoreFile: return -1 else: return 1 if self.linkDepth != other.linkDepth: if self.linkDepth < other.linkDepth: return -1 else: return 1 if self.dereference != other.dereference: if self.dereference < other.dereference: return -1 else: return 1 if self.recursionLevel != other.recursionLevel: if self.recursionLevel < other.recursionLevel: return -1 else: return 1 if self.absoluteExcludePaths != other.absoluteExcludePaths: if self.absoluteExcludePaths < other.absoluteExcludePaths: return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = value def _getIgnoreFile(self): """ Property target used to get the ignore file. """ return self._ignoreFile def _setLinkDepth(self, value): """ Property target used to set the link depth. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._linkDepth = None else: try: value = int(value) except TypeError: raise ValueError("Link depth value must be an integer >= 0.") if value < 0: raise ValueError("Link depth value must be an integer >= 0.") self._linkDepth = value def _getLinkDepth(self): """ Property target used to get the action linkDepth. """ return self._linkDepth def _setDereference(self, value): """ Property target used to set the dereference flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._dereference = True else: self._dereference = False def _getDereference(self): """ Property target used to get the dereference flag. """ return self._dereference def _setRecursionLevel(self, value): """ Property target used to set the recursionLevel. The value must be an integer. @raise ValueError: If the value is not valid. """ if value is None: self._recursionLevel = None else: try: value = int(value) except TypeError: raise ValueError("Recusion level value must be an integer.") self._recursionLevel = value def _getRecursionLevel(self): """ Property target used to get the action recursionLevel. """ return self._recursionLevel def _setAbsoluteExcludePaths(self, value): """ Property target used to set the absolute exclude paths list. Either the value must be C{None} or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. """ if value is None: self._absoluteExcludePaths = None else: try: saved = self._absoluteExcludePaths self._absoluteExcludePaths = AbsolutePathList() self._absoluteExcludePaths.extend(value) except Exception, e: self._absoluteExcludePaths = saved raise e def _getAbsoluteExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._absoluteExcludePaths def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception, e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. 
""" return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the directory to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this directory.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this directory.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, doc="Overridden ignore file name for this directory.") linkDepth = property(_getLinkDepth, _setLinkDepth, None, doc="Maximum at which soft links should be followed.") dereference = property(_getDereference, _setDereference, None, doc="Whether to dereference links that are followed.") recursionLevel = property(_getRecursionLevel, _setRecursionLevel, None, "Recursion level to use for recursive directory collection") absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") ######################################################################## # PurgeDir class definition ######################################################################## class PurgeDir(object): """ Class representing a Cedar Backup purge directory. 
The following restrictions exist on data in this class: - The absolute path must be an absolute path - The retain days value must be an integer >= 0. @sort: __init__, __repr__, __str__, __cmp__, absolutePath, retainDays """ def __init__(self, absolutePath=None, retainDays=None): """ Constructor for the C{PurgeDir} class. @param absolutePath: Absolute path of the directory to be purged. @param retainDays: Number of days content within directory should be retained. @raise ValueError: If one of the values is invalid. """ self._absolutePath = None self._retainDays = None self.absolutePath = absolutePath self.retainDays = retainDays def __repr__(self): """ Official string representation for class instance. """ return "PurgeDir(%s, %s)" % (self.absolutePath, self.retainDays) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.retainDays != other.retainDays: if self.retainDays < other.retainDays: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. 
""" return self._absolutePath def _setRetainDays(self, value): """ Property target used to set the retain days value. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._retainDays = None else: try: value = int(value) except TypeError: raise ValueError("Retain days value must be an integer >= 0.") if value < 0: raise ValueError("Retain days value must be an integer >= 0.") self._retainDays = value def _getRetainDays(self): """ Property target used to get the absolute path. """ return self._retainDays absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, "Absolute path of directory to purge.") retainDays = property(_getRetainDays, _setRetainDays, None, "Number of days content within directory should be retained.") ######################################################################## # LocalPeer class definition ######################################################################## class LocalPeer(object): """ Class representing a Cedar Backup peer. The following restrictions exist on data in this class: - The peer name must be a non-empty string. - The collect directory must be an absolute path. - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}. @sort: __init__, __repr__, __str__, __cmp__, name, collectDir """ def __init__(self, name=None, collectDir=None, ignoreFailureMode=None): """ Constructor for the C{LocalPeer} class. @param name: Name of the peer, typically a valid hostname. @param collectDir: Collect directory to stage files from on peer. @param ignoreFailureMode: Ignore failure mode for peer. @raise ValueError: If one of the values is invalid. """ self._name = None self._collectDir = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.ignoreFailureMode = ignoreFailureMode def __repr__(self): """ Official string representation for class instance. 
""" return "LocalPeer(%s, %s, %s)" % (self.name, self.collectDir, self.ignoreFailureMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.name != other.name: if self.name < other.name: return -1 else: return 1 if self.collectDir != other.collectDir: if self.collectDir < other.collectDir: return -1 else: return 1 if self.ignoreFailureMode != other.ignoreFailureMode: if self.ignoreFailureMode < other.ignoreFailureMode: return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. """ return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer, typically a valid hostname.") collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ######################################################################## # RemotePeer class definition ######################################################################## class RemotePeer(object): """ Class representing a Cedar Backup peer. The following restrictions exist on data in this class: - The peer name must be a non-empty string. - The collect directory must be an absolute path. - The remote user must be a non-empty string. - The rcp command must be a non-empty string. - The rsh command must be a non-empty string. - The cback command must be a non-empty string. - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX} - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}. @sort: __init__, __repr__, __str__, __cmp__, name, collectDir, remoteUser, rcpCommand """ def __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None): """ Constructor for the C{RemotePeer} class. @param name: Name of the peer, must be a valid hostname. @param collectDir: Collect directory to stage files from on peer. @param remoteUser: Name of backup user on remote peer. @param rcpCommand: Overridden rcp-compatible copy command for peer. @param rshCommand: Overridden rsh-compatible remote shell command for peer. 
      @param cbackCommand: Overridden cback-compatible command to use on remote peer.
      @param managed: Indicates whether this is a managed peer.
      @param managedActions: Overridden set of actions that are managed on the peer.
      @param ignoreFailureMode: Ignore failure mode for peer.

      @raise ValueError: If one of the values is invalid.
      """
      self._name = None
      self._collectDir = None
      self._remoteUser = None
      self._rcpCommand = None
      self._rshCommand = None
      self._cbackCommand = None
      self._managed = None
      self._managedActions = None
      self._ignoreFailureMode = None
      self.name = name
      self.collectDir = collectDir
      self.remoteUser = remoteUser
      self.rcpCommand = rcpCommand
      self.rshCommand = rshCommand
      self.cbackCommand = cbackCommand
      self.managed = managed
      self.managedActions = managedActions
      self.ignoreFailureMode = ignoreFailureMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "RemotePeer(%s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.name, self.collectDir, self.remoteUser,
                                                                 self.rcpCommand, self.rshCommand, self.cbackCommand,
                                                                 self.managed, self.managedActions, self.ignoreFailureMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.name != other.name:
         if self.name < other.name:
            return -1
         else:
            return 1
      if self.collectDir != other.collectDir:
         if self.collectDir < other.collectDir:
            return -1
         else:
            return 1
      if self.remoteUser != other.remoteUser:
         if self.remoteUser < other.remoteUser:
            return -1
         else:
            return 1
      if self.rcpCommand != other.rcpCommand:
         if self.rcpCommand < other.rcpCommand:
            return -1
         else:
            return 1
      if self.rshCommand != other.rshCommand:
         if self.rshCommand < other.rshCommand:
            return -1
         else:
            return 1
      if self.cbackCommand != other.cbackCommand:
         if self.cbackCommand < other.cbackCommand:
            return -1
         else:
            return 1
      if self.managed != other.managed:
         if self.managed < other.managed:
            return -1
         else:
            return 1
      if self.managedActions != other.managedActions:
         if self.managedActions < other.managedActions:
            return -1
         else:
            return 1
      if self.ignoreFailureMode != other.ignoreFailureMode:
         if self.ignoreFailureMode < other.ignoreFailureMode:
            return -1
         else:
            return 1
      return 0

   def _setName(self, value):
      """
      Property target used to set the peer name.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The peer name must be a non-empty string.")
      self._name = value

   def _getName(self):
      """
      Property target used to get the peer name.
      """
      return self._name

   def _setCollectDir(self, value):
      """
      Property target used to set the collect directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Collect directory must be an absolute path.")
      self._collectDir = encodePath(value)

   def _getCollectDir(self):
      """
      Property target used to get the collect directory.
      """
      return self._collectDir

   def _setRemoteUser(self, value):
      """
      Property target used to set the remote user.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The remote user must be a non-empty string.")
      self._remoteUser = value

   def _getRemoteUser(self):
      """
      Property target used to get the remote user.
      """
      return self._remoteUser

   def _setRcpCommand(self, value):
      """
      Property target used to set the rcp command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rcp command must be a non-empty string.")
      self._rcpCommand = value

   def _getRcpCommand(self):
      """
      Property target used to get the rcp command.
      """
      return self._rcpCommand

   def _setRshCommand(self, value):
      """
      Property target used to set the rsh command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rsh command must be a non-empty string.")
      self._rshCommand = value

   def _getRshCommand(self):
      """
      Property target used to get the rsh command.
      """
      return self._rshCommand

   def _setCbackCommand(self, value):
      """
      Property target used to set the cback command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The cback command must be a non-empty string.")
      self._cbackCommand = value

   def _getCbackCommand(self):
      """
      Property target used to get the cback command.
      """
      return self._cbackCommand

   def _setManaged(self, value):
      """
      Property target used to set the managed flag.
      No validations, but we normalize the value to C{True} or C{False}.
""" if value: self._managed = True else: self._managed = False def _getManaged(self): """ Property target used to get the managed flag. """ return self._managed def _setManagedActions(self, value): """ Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._managedActions = None else: try: saved = self._managedActions self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._managedActions.extend(value) except Exception, e: self._managedActions = saved raise e def _getManagedActions(self): """ Property target used to get the managed actions list. """ return self._managedActions def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. 
""" return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer, must be a valid hostname.") collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.") remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of backup user on remote peer.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Overridden rcp-compatible copy command for peer.") rshCommand = property(_getRshCommand, _setRshCommand, None, "Overridden rsh-compatible remote shell command for peer.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Overridden cback-compatible command to use on remote peer.") managed = property(_getManaged, _setManaged, None, "Indicates whether this is a managed peer.") managedActions = property(_getManagedActions, _setManagedActions, None, "Overridden set of actions that are managed on the peer.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ######################################################################## # ReferenceConfig class definition ######################################################################## class ReferenceConfig(object): """ Class representing a Cedar Backup reference configuration. The reference information is just used for saving off metadata about configuration and exists mostly for backwards-compatibility with Cedar Backup 1.x. @sort: __init__, __repr__, __str__, __cmp__, author, revision, description, generator """ def __init__(self, author=None, revision=None, description=None, generator=None): """ Constructor for the C{ReferenceConfig} class. @param author: Author of the configuration file. @param revision: Revision of the configuration file. @param description: Description of the configuration file. @param generator: Tool that generated the configuration file. 
""" self._author = None self._revision = None self._description = None self._generator = None self.author = author self.revision = revision self.description = description self.generator = generator def __repr__(self): """ Official string representation for class instance. """ return "ReferenceConfig(%s, %s, %s, %s)" % (self.author, self.revision, self.description, self.generator) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.author != other.author: if self.author < other.author: return -1 else: return 1 if self.revision != other.revision: if self.revision < other.revision: return -1 else: return 1 if self.description != other.description: if self.description < other.description: return -1 else: return 1 if self.generator != other.generator: if self.generator < other.generator: return -1 else: return 1 return 0 def _setAuthor(self, value): """ Property target used to set the author value. No validations. """ self._author = value def _getAuthor(self): """ Property target used to get the author value. """ return self._author def _setRevision(self, value): """ Property target used to set the revision value. No validations. """ self._revision = value def _getRevision(self): """ Property target used to get the revision value. """ return self._revision def _setDescription(self, value): """ Property target used to set the description value. No validations. """ self._description = value def _getDescription(self): """ Property target used to get the description value. """ return self._description def _setGenerator(self, value): """ Property target used to set the generator value. No validations. 
""" self._generator = value def _getGenerator(self): """ Property target used to get the generator value. """ return self._generator author = property(_getAuthor, _setAuthor, None, "Author of the configuration file.") revision = property(_getRevision, _setRevision, None, "Revision of the configuration file.") description = property(_getDescription, _setDescription, None, "Description of the configuration file.") generator = property(_getGenerator, _setGenerator, None, "Tool that generated the configuration file.") ######################################################################## # ExtensionsConfig class definition ######################################################################## class ExtensionsConfig(object): """ Class representing Cedar Backup extensions configuration. Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. For instance, a hypothetical third party might write extension code to collect database repository data. If they write a properly-formatted extension function, they can use the extension configuration to map a command-line Cedar Backup action (i.e. "database") to their function. The following restrictions exist on data in this class: - If set, the order mode must be one of the values in C{VALID_ORDER_MODES} - The actions list must be a list of C{ExtendedAction} objects. @sort: __init__, __repr__, __str__, __cmp__, orderMode, actions """ def __init__(self, actions=None, orderMode=None): """ Constructor for the C{ExtensionsConfig} class. @param actions: List of extended actions """ self._orderMode = None self._actions = None self.orderMode = orderMode self.actions = actions def __repr__(self): """ Official string representation for class instance. """ return "ExtensionsConfig(%s, %s)" % (self.orderMode, self.actions) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.orderMode != other.orderMode: if self.orderMode < other.orderMode: return -1 else: return 1 if self.actions != other.actions: if self.actions < other.actions: return -1 else: return 1 return 0 def _setOrderMode(self, value): """ Property target used to set the order mode. The value must be one of L{VALID_ORDER_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ORDER_MODES: raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES) self._orderMode = value def _getOrderMode(self): """ Property target used to get the order mode. """ return self._orderMode def _setActions(self, value): """ Property target used to set the actions list. Either the value must be C{None} or each element must be an C{ExtendedAction}. @raise ValueError: If the value is not a C{ExtendedAction} """ if value is None: self._actions = None else: try: saved = self._actions self._actions = ObjectTypeList(ExtendedAction, "ExtendedAction") self._actions.extend(value) except Exception, e: self._actions = saved raise e def _getActions(self): """ Property target used to get the actions list. """ return self._actions orderMode = property(_getOrderMode, _setOrderMode, None, "Order mode for extensions, to control execution ordering.") actions = property(_getActions, _setActions, None, "List of extended actions.") ######################################################################## # OptionsConfig class definition ######################################################################## class OptionsConfig(object): """ Class representing a Cedar Backup global options configuration. 
   The options section is used to store global configuration options and
   defaults that can be applied to other sections.

   The following restrictions exist on data in this class:

      - The working directory must be an absolute path.
      - The starting day must be a day of the week in English, i.e. C{"monday"}, C{"tuesday"}, etc.
      - All of the other values must be non-empty strings if they are set to something other than C{None}.
      - The overrides list must be a list of C{CommandOverride} objects.
      - The hooks list must be a list of C{ActionHook} objects.
      - The cback command must be a non-empty string.
      - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX}

   @sort: __init__, __repr__, __str__, __cmp__, addOverride, replaceOverride,
          startingDay, workingDir, backupUser, backupGroup, rcpCommand, rshCommand,
          cbackCommand, overrides, hooks, managedActions
   """

   def __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None,
                rcpCommand=None, overrides=None, hooks=None, rshCommand=None,
                cbackCommand=None, managedActions=None):
      """
      Constructor for the C{OptionsConfig} class.

      @param startingDay: Day that starts the week.
      @param workingDir: Working (temporary) directory to use for backups.
      @param backupUser: Effective user that backups should run as.
      @param backupGroup: Effective group that backups should run as.
      @param rcpCommand: Default rcp-compatible copy command for staging.
      @param rshCommand: Default rsh-compatible command to use for remote shells.
      @param cbackCommand: Default cback-compatible command to use on managed remote peers.
      @param overrides: List of configured command path overrides, if any.
      @param hooks: List of configured pre- and post-action hooks.
      @param managedActions: Default set of actions that are managed on remote peers.

      @raise ValueError: If one of the values is invalid.
""" self._startingDay = None self._workingDir = None self._backupUser = None self._backupGroup = None self._rcpCommand = None self._rshCommand = None self._cbackCommand = None self._overrides = None self._hooks = None self._managedActions = None self.startingDay = startingDay self.workingDir = workingDir self.backupUser = backupUser self.backupGroup = backupGroup self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.overrides = overrides self.hooks = hooks self.managedActions = managedActions def __repr__(self): """ Official string representation for class instance. """ return "OptionsConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.startingDay, self.workingDir, self.backupUser, self.backupGroup, self.rcpCommand, self.overrides, self.hooks, self.rshCommand, self.cbackCommand, self.managedActions) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.startingDay != other.startingDay: if self.startingDay < other.startingDay: return -1 else: return 1 if self.workingDir != other.workingDir: if self.workingDir < other.workingDir: return -1 else: return 1 if self.backupUser != other.backupUser: if self.backupUser < other.backupUser: return -1 else: return 1 if self.backupGroup != other.backupGroup: if self.backupGroup < other.backupGroup: return -1 else: return 1 if self.rcpCommand != other.rcpCommand: if self.rcpCommand < other.rcpCommand: return -1 else: return 1 if self.rshCommand != other.rshCommand: if self.rshCommand < other.rshCommand: return -1 else: return 1 if self.cbackCommand != other.cbackCommand: if self.cbackCommand < other.cbackCommand: return -1 else: return 1 if self.overrides != other.overrides: if self.overrides < other.overrides: return -1 else: return 1 if self.hooks != other.hooks: if self.hooks < other.hooks: return -1 else: return 1 if self.managedActions != other.managedActions: if self.managedActions < other.managedActions: return -1 else: return 1 return 0 def addOverride(self, command, absolutePath): """ If no override currently exists for the command, add one. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overrridden command. """ override = CommandOverride(command, absolutePath) if self.overrides is None: self.overrides = [ override, ] else: exists = False for obj in self.overrides: if obj.command == override.command: exists = True break if not exists: self.overrides.append(override) def replaceOverride(self, command, absolutePath): """ If override currently exists for the command, replace it; otherwise add it. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overrridden command. 
""" override = CommandOverride(command, absolutePath) if self.overrides is None: self.overrides = [ override, ] else: exists = False for obj in self.overrides: if obj.command == override.command: exists = True obj.absolutePath = override.absolutePath break if not exists: self.overrides.append(override) def _setStartingDay(self, value): """ Property target used to set the starting day. If it is not C{None}, the value must be a valid English day of the week, one of C{"monday"}, C{"tuesday"}, C{"wednesday"}, etc. @raise ValueError: If the value is not a valid day of the week. """ if value is not None: if value not in ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ]: raise ValueError("Starting day must be an English day of the week, i.e. \"monday\".") self._startingDay = value def _getStartingDay(self): """ Property target used to get the starting day. """ return self._startingDay def _setWorkingDir(self, value): """ Property target used to set the working directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Working directory must be an absolute path.") self._workingDir = encodePath(value) def _getWorkingDir(self): """ Property target used to get the working directory. """ return self._workingDir def _setBackupUser(self, value): """ Property target used to set the backup user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("Backup user must be a non-empty string.") self._backupUser = value def _getBackupUser(self): """ Property target used to get the backup user. 
""" return self._backupUser def _setBackupGroup(self, value): """ Property target used to set the backup group. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("Backup group must be a non-empty string.") self._backupGroup = value def _getBackupGroup(self): """ Property target used to get the backup group. """ return self._backupGroup def _setRcpCommand(self, value): """ Property target used to set the rcp command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rcp command must be a non-empty string.") self._rcpCommand = value def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target used to set the rsh command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rsh command must be a non-empty string.") self._rshCommand = value def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target used to set the cback command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The cback command must be a non-empty string.") self._cbackCommand = value def _getCbackCommand(self): """ Property target used to get the cback command. """ return self._cbackCommand def _setOverrides(self, value): """ Property target used to set the command path overrides list. Either the value must be C{None} or each element must be a C{CommandOverride}. 
      @raise ValueError: If the value is not a C{CommandOverride}
      """
      if value is None:
         self._overrides = None
      else:
         try:
            saved = self._overrides
            self._overrides = ObjectTypeList(CommandOverride, "CommandOverride")
            self._overrides.extend(value)
         except Exception, e:
            self._overrides = saved
            raise e

   def _getOverrides(self):
      """
      Property target used to get the command path overrides list.
      """
      return self._overrides

   def _setHooks(self, value):
      """
      Property target used to set the pre- and post-action hooks list.
      Either the value must be C{None} or each element must be an C{ActionHook}.
      @raise ValueError: If the value is not an C{ActionHook}
      """
      if value is None:
         self._hooks = None
      else:
         try:
            saved = self._hooks
            self._hooks = ObjectTypeList(ActionHook, "ActionHook")
            self._hooks.extend(value)
         except Exception, e:
            self._hooks = saved
            raise e

   def _getHooks(self):
      """
      Property target used to get the pre- and post-action hooks list.
      """
      return self._hooks

   def _setManagedActions(self, value):
      """
      Property target used to set the managed actions list.
      Either the value must be C{None} or each element must be a valid action
      name, matching C{ACTION_NAME_REGEX}.
      @raise ValueError: If an element is not a valid action name.
      """
      if value is None:
         self._managedActions = None
      else:
         try:
            saved = self._managedActions
            self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
            self._managedActions.extend(value)
         except Exception, e:
            self._managedActions = saved
            raise e

   def _getManagedActions(self):
      """
      Property target used to get the managed actions list.
""" return self._managedActions startingDay = property(_getStartingDay, _setStartingDay, None, "Day that starts the week.") workingDir = property(_getWorkingDir, _setWorkingDir, None, "Working (temporary) directory to use for backups.") backupUser = property(_getBackupUser, _setBackupUser, None, "Effective user that backups should run as.") backupGroup = property(_getBackupGroup, _setBackupGroup, None, "Effective group that backups should run as.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Default rcp-compatible copy command for staging.") rshCommand = property(_getRshCommand, _setRshCommand, None, "Default rsh-compatible command to use for remote shells.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Default cback-compatible command to use on managed remote peers.") overrides = property(_getOverrides, _setOverrides, None, "List of configured command path overrides, if any.") hooks = property(_getHooks, _setHooks, None, "List of configured pre- and post-action hooks.") managedActions = property(_getManagedActions, _setManagedActions, None, "Default set of actions that are managed on remote peers.") ######################################################################## # PeersConfig class definition ######################################################################## class PeersConfig(object): """ Class representing Cedar Backup global peer configuration. This section contains a list of local and remote peers in a master's backup pool. The section is optional. If a master does not define this section, then all peers are unmanaged, and the stage configuration section must explicitly list any peer that is to be staged. If this section is configured, then peers may be managed or unmanaged, and the stage section peer configuration (if any) completely overrides this configuration. 
   The following restrictions exist on data in this class:

      - The list of local peers must contain only C{LocalPeer} objects
      - The list of remote peers must contain only C{RemotePeer} objects

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, hasPeers, localPeers, remotePeers
   """

   def __init__(self, localPeers=None, remotePeers=None):
      """
      Constructor for the C{PeersConfig} class.

      @param localPeers: List of local peers.
      @param remotePeers: List of remote peers.

      @raise ValueError: If one of the values is invalid.
      """
      self._localPeers = None
      self._remotePeers = None
      self.localPeers = localPeers
      self.remotePeers = remotePeers

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PeersConfig(%s, %s)" % (self.localPeers, self.remotePeers)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.localPeers != other.localPeers:
         if self.localPeers < other.localPeers:
            return -1
         else:
            return 1
      if self.remotePeers != other.remotePeers:
         if self.remotePeers < other.remotePeers:
            return -1
         else:
            return 1
      return 0

   def hasPeers(self):
      """
      Indicates whether any peers are filled into this object.
      @return: Boolean true if any local or remote peers are filled in, false otherwise.
      """
      return ((self.localPeers is not None and len(self.localPeers) > 0) or
              (self.remotePeers is not None and len(self.remotePeers) > 0))

   def _setLocalPeers(self, value):
      """
      Property target used to set the local peers list.
      Either the value must be C{None} or each element must be a C{LocalPeer}.
      @raise ValueError: If the value is not a C{LocalPeer}
""" if value is None: self._localPeers = None else: try: saved = self._localPeers self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer") self._localPeers.extend(value) except Exception, e: self._localPeers = saved raise e def _getLocalPeers(self): """ Property target used to get the local peers list. """ return self._localPeers def _setRemotePeers(self, value): """ Property target used to set the remote peers list. Either the value must be C{None} or each element must be a C{RemotePeer}. @raise ValueError: If the value is not a C{RemotePeer} """ if value is None: self._remotePeers = None else: try: saved = self._remotePeers self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer") self._remotePeers.extend(value) except Exception, e: self._remotePeers = saved raise e def _getRemotePeers(self): """ Property target used to get the remote peers list. """ return self._remotePeers localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.") remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.") ######################################################################## # CollectConfig class definition ######################################################################## class CollectConfig(object): """ Class representing a Cedar Backup collect configuration. The following restrictions exist on data in this class: - The target directory must be an absolute path. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. - The ignore file must be a non-empty string. - Each of the paths in C{absoluteExcludePaths} must be an absolute path - The collect file list must be a list of C{CollectFile} objects. - The collect directory list must be a list of C{CollectDir} objects. 
   For the C{absoluteExcludePaths} list, validation is accomplished through the
   L{util.AbsolutePathList} list implementation that overrides common list
   methods and transparently does the absolute path validation for us.

   For the C{collectFiles} and C{collectDirs} lists, validation is accomplished
   through the L{util.ObjectTypeList} list implementation that overrides common
   list methods and transparently ensures that each element has an appropriate
   type.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, targetDir, collectMode,
          archiveMode, ignoreFile, absoluteExcludePaths, excludePatterns,
          collectFiles, collectDirs
   """

   def __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None,
                absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None):
      """
      Constructor for the C{CollectConfig} class.

      @param targetDir: Directory to collect files into.
      @param collectMode: Default collect mode.
      @param archiveMode: Default archive mode for collect files.
      @param ignoreFile: Default ignore file name.
      @param absoluteExcludePaths: List of absolute paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude.
      @param collectFiles: List of collect files.
      @param collectDirs: List of collect directories.

      @raise ValueError: If one of the values is invalid.
      """
      self._targetDir = None
      self._collectMode = None
      self._archiveMode = None
      self._ignoreFile = None
      self._absoluteExcludePaths = None
      self._excludePatterns = None
      self._collectFiles = None
      self._collectDirs = None
      self.targetDir = targetDir
      self.collectMode = collectMode
      self.archiveMode = archiveMode
      self.ignoreFile = ignoreFile
      self.absoluteExcludePaths = absoluteExcludePaths
      self.excludePatterns = excludePatterns
      self.collectFiles = collectFiles
      self.collectDirs = collectDirs

   def __repr__(self):
      """
      Official string representation for class instance.
""" return "CollectConfig(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.targetDir, self.collectMode, self.archiveMode, self.ignoreFile, self.absoluteExcludePaths, self.excludePatterns, self.collectFiles, self.collectDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.targetDir != other.targetDir: if self.targetDir < other.targetDir: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.archiveMode != other.archiveMode: if self.archiveMode < other.archiveMode: return -1 else: return 1 if self.ignoreFile != other.ignoreFile: if self.ignoreFile < other.ignoreFile: return -1 else: return 1 if self.absoluteExcludePaths != other.absoluteExcludePaths: if self.absoluteExcludePaths < other.absoluteExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 if self.collectFiles != other.collectFiles: if self.collectFiles < other.collectFiles: return -1 else: return 1 if self.collectDirs != other.collectDirs: if self.collectDirs < other.collectDirs: return -1 else: return 1 return 0 def _setTargetDir(self, value): """ Property target used to set the target directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. 
""" if value is not None: if not os.path.isabs(value): raise ValueError("Target directory must be an absolute path.") self._targetDir = encodePath(value) def _getTargetDir(self): """ Property target used to get the target directory. """ return self._targetDir def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = encodePath(value) def _getIgnoreFile(self): """ Property target used to get the ignore file. """ return self._ignoreFile def _setAbsoluteExcludePaths(self, value): """ Property target used to set the absolute exclude paths list. Either the value must be C{None} or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. 
""" if value is None: self._absoluteExcludePaths = None else: try: saved = self._absoluteExcludePaths self._absoluteExcludePaths = AbsolutePathList() self._absoluteExcludePaths.extend(value) except Exception, e: self._absoluteExcludePaths = saved raise e def _getAbsoluteExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._absoluteExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns def _setCollectFiles(self, value): """ Property target used to set the collect files list. Either the value must be C{None} or each element must be a C{CollectFile}. @raise ValueError: If the value is not a C{CollectFile} """ if value is None: self._collectFiles = None else: try: saved = self._collectFiles self._collectFiles = ObjectTypeList(CollectFile, "CollectFile") self._collectFiles.extend(value) except Exception, e: self._collectFiles = saved raise e def _getCollectFiles(self): """ Property target used to get the collect files list. """ return self._collectFiles def _setCollectDirs(self, value): """ Property target used to set the collect dirs list. Either the value must be C{None} or each element must be a C{CollectDir}. @raise ValueError: If the value is not a C{CollectDir} """ if value is None: self._collectDirs = None else: try: saved = self._collectDirs self._collectDirs = ObjectTypeList(CollectDir, "CollectDir") self._collectDirs.extend(value) except Exception, e: self._collectDirs = saved raise e def _getCollectDirs(self): """ Property target used to get the collect dirs list. 
""" return self._collectDirs targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to collect files into.") collectMode = property(_getCollectMode, _setCollectMode, None, "Default collect mode.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, "Default archive mode for collect files.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Default ignore file name.") absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expressions patterns to exclude.") collectFiles = property(_getCollectFiles, _setCollectFiles, None, "List of collect files.") collectDirs = property(_getCollectDirs, _setCollectDirs, None, "List of collect directories.") ######################################################################## # StageConfig class definition ######################################################################## class StageConfig(object): """ Class representing a Cedar Backup stage configuration. The following restrictions exist on data in this class: - The target directory must be an absolute path - The list of local peers must contain only C{LocalPeer} objects - The list of remote peers must contain only C{RemotePeer} objects @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, targetDir, localPeers, remotePeers """ def __init__(self, targetDir=None, localPeers=None, remotePeers=None): """ Constructor for the C{StageConfig} class. @param targetDir: Directory to stage files into, by peer name. @param localPeers: List of local peers. @param remotePeers: List of remote peers. @raise ValueError: If one of the values is invalid. 
""" self._targetDir = None self._localPeers = None self._remotePeers = None self.targetDir = targetDir self.localPeers = localPeers self.remotePeers = remotePeers def __repr__(self): """ Official string representation for class instance. """ return "StageConfig(%s, %s, %s)" % (self.targetDir, self.localPeers, self.remotePeers) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.targetDir != other.targetDir: if self.targetDir < other.targetDir: return -1 else: return 1 if self.localPeers != other.localPeers: if self.localPeers < other.localPeers: return -1 else: return 1 if self.remotePeers != other.remotePeers: if self.remotePeers < other.remotePeers: return -1 else: return 1 return 0 def hasPeers(self): """ Indicates whether any peers are filled into this object. @return: Boolean true if any local or remote peers are filled in, false otherwise. """ return ((self.localPeers is not None and len(self.localPeers) > 0) or (self.remotePeers is not None and len(self.remotePeers) > 0)) def _setTargetDir(self, value): """ Property target used to set the target directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Target directory must be an absolute path.") self._targetDir = encodePath(value) def _getTargetDir(self): """ Property target used to get the target directory. 
""" return self._targetDir def _setLocalPeers(self, value): """ Property target used to set the local peers list. Either the value must be C{None} or each element must be a C{LocalPeer}. @raise ValueError: If the value is not an absolute path. """ if value is None: self._localPeers = None else: try: saved = self._localPeers self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer") self._localPeers.extend(value) except Exception, e: self._localPeers = saved raise e def _getLocalPeers(self): """ Property target used to get the local peers list. """ return self._localPeers def _setRemotePeers(self, value): """ Property target used to set the remote peers list. Either the value must be C{None} or each element must be a C{RemotePeer}. @raise ValueError: If the value is not a C{RemotePeer} """ if value is None: self._remotePeers = None else: try: saved = self._remotePeers self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer") self._remotePeers.extend(value) except Exception, e: self._remotePeers = saved raise e def _getRemotePeers(self): """ Property target used to get the remote peers list. """ return self._remotePeers targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to stage files into, by peer name.") localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.") remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.") ######################################################################## # StoreConfig class definition ######################################################################## class StoreConfig(object): """ Class representing a Cedar Backup store configuration. The following restrictions exist on data in this class: - The source directory must be an absolute path. - The media type must be one of the values in L{VALID_MEDIA_TYPES}. - The device type must be one of the values in L{VALID_DEVICE_TYPES}. - The device path must be an absolute path. 
- The SCSI id, if provided, must be in the form specified by L{validateScsiId}. - The drive speed must be an integer >= 1 - The blanking behavior must be a C{BlankBehavior} object - The refresh media delay must be an integer >= 0 - The eject delay must be an integer >= 0 Note that although the blanking factor must be a positive floating point number, it is stored as a string. This is done so that we can losslessly go back and forth between XML and object representations of configuration. @sort: __init__, __repr__, __str__, __cmp__, sourceDir, mediaType, deviceType, devicePath, deviceScsiId, driveSpeed, checkData, checkMedia, warnMidnite, noEject, blankBehavior, refreshMediaDelay, ejectDelay """ def __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None): """ Constructor for the C{StoreConfig} class. @param sourceDir: Directory whose contents should be written to media. @param mediaType: Type of the media (see notes above). @param deviceType: Type of the device (optional, see notes above). @param devicePath: Filesystem device name for writer device, i.e. C{/dev/cdrw}. @param deviceScsiId: SCSI id for writer device, i.e. C{[:]scsibus,target,lun}. @param driveSpeed: Speed of the drive, i.e. C{2} for 2x drive, etc. @param checkData: Whether resulting image should be validated. @param checkMedia: Whether media should be checked before being written to. @param warnMidnite: Whether to generate warnings for crossing midnite. @param noEject: Indicates that the writer device should not be ejected. @param blankBehavior: Controls optimized blanking behavior. @param refreshMediaDelay: Delay, in seconds, to add after refreshing media @param ejectDelay: Delay, in seconds, to add after ejecting media before closing the tray @raise ValueError: If one of the values is invalid. 
""" self._sourceDir = None self._mediaType = None self._deviceType = None self._devicePath = None self._deviceScsiId = None self._driveSpeed = None self._checkData = None self._checkMedia = None self._warnMidnite = None self._noEject = None self._blankBehavior = None self._refreshMediaDelay = None self._ejectDelay = None self.sourceDir = sourceDir self.mediaType = mediaType self.deviceType = deviceType self.devicePath = devicePath self.deviceScsiId = deviceScsiId self.driveSpeed = driveSpeed self.checkData = checkData self.checkMedia = checkMedia self.warnMidnite = warnMidnite self.noEject = noEject self.blankBehavior = blankBehavior self.refreshMediaDelay = refreshMediaDelay self.ejectDelay = ejectDelay def __repr__(self): """ Official string representation for class instance. """ return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % ( self.sourceDir, self.mediaType, self.deviceType, self.devicePath, self.deviceScsiId, self.driveSpeed, self.checkData, self.warnMidnite, self.noEject, self.checkMedia, self.blankBehavior, self.refreshMediaDelay, self.ejectDelay) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.sourceDir != other.sourceDir: if self.sourceDir < other.sourceDir: return -1 else: return 1 if self.mediaType != other.mediaType: if self.mediaType < other.mediaType: return -1 else: return 1 if self.deviceType != other.deviceType: if self.deviceType < other.deviceType: return -1 else: return 1 if self.devicePath != other.devicePath: if self.devicePath < other.devicePath: return -1 else: return 1 if self.deviceScsiId != other.deviceScsiId: if self.deviceScsiId < other.deviceScsiId: return -1 else: return 1 if self.driveSpeed != other.driveSpeed: if self.driveSpeed < other.driveSpeed: return -1 else: return 1 if self.checkData != other.checkData: if self.checkData < other.checkData: return -1 else: return 1 if self.checkMedia != other.checkMedia: if self.checkMedia < other.checkMedia: return -1 else: return 1 if self.warnMidnite != other.warnMidnite: if self.warnMidnite < other.warnMidnite: return -1 else: return 1 if self.noEject != other.noEject: if self.noEject < other.noEject: return -1 else: return 1 if self.blankBehavior != other.blankBehavior: if self.blankBehavior < other.blankBehavior: return -1 else: return 1 if self.refreshMediaDelay != other.refreshMediaDelay: if self.refreshMediaDelay < other.refreshMediaDelay: return -1 else: return 1 if self.ejectDelay != other.ejectDelay: if self.ejectDelay < other.ejectDelay: return -1 else: return 1 return 0 def _setSourceDir(self, value): """ Property target used to set the source directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Source directory must be an absolute path.") self._sourceDir = encodePath(value) def _getSourceDir(self): """ Property target used to get the source directory. 
""" return self._sourceDir def _setMediaType(self, value): """ Property target used to set the media type. The value must be one of L{VALID_MEDIA_TYPES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_MEDIA_TYPES: raise ValueError("Media type must be one of %s." % VALID_MEDIA_TYPES) self._mediaType = value def _getMediaType(self): """ Property target used to get the media type. """ return self._mediaType def _setDeviceType(self, value): """ Property target used to set the device type. The value must be one of L{VALID_DEVICE_TYPES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_DEVICE_TYPES: raise ValueError("Device type must be one of %s." % VALID_DEVICE_TYPES) self._deviceType = value def _getDeviceType(self): """ Property target used to get the device type. """ return self._deviceType def _setDevicePath(self, value): """ Property target used to set the device path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Device path must be an absolute path.") self._devicePath = encodePath(value) def _getDevicePath(self): """ Property target used to get the device path. """ return self._devicePath def _setDeviceScsiId(self, value): """ Property target used to set the SCSI id The SCSI id must be valid per L{validateScsiId}. @raise ValueError: If the value is not valid. """ if value is None: self._deviceScsiId = None else: self._deviceScsiId = validateScsiId(value) def _getDeviceScsiId(self): """ Property target used to get the SCSI id. """ return self._deviceScsiId def _setDriveSpeed(self, value): """ Property target used to set the drive speed. The drive speed must be valid per L{validateDriveSpeed}. 
@raise ValueError: If the value is not valid. """ self._driveSpeed = validateDriveSpeed(value) def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _setCheckData(self, value): """ Property target used to set the check data flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._checkData = True else: self._checkData = False def _getCheckData(self): """ Property target used to get the check data flag. """ return self._checkData def _setCheckMedia(self, value): """ Property target used to set the check media flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._checkMedia = True else: self._checkMedia = False def _getCheckMedia(self): """ Property target used to get the check media flag. """ return self._checkMedia def _setWarnMidnite(self, value): """ Property target used to set the midnite warning flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._warnMidnite = True else: self._warnMidnite = False def _getWarnMidnite(self): """ Property target used to get the midnite warning flag. """ return self._warnMidnite def _setNoEject(self, value): """ Property target used to set the no-eject flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._noEject = True else: self._noEject = False def _getNoEject(self): """ Property target used to get the no-eject flag. """ return self._noEject def _setBlankBehavior(self, value): """ Property target used to set blanking behavior configuration. If not C{None}, the value must be a C{BlankBehavior} object. 
      @raise ValueError: If the value is not a C{BlankBehavior}
      """
      if value is None:
         self._blankBehavior = None
      else:
         if not isinstance(value, BlankBehavior):
            raise ValueError("Value must be a C{BlankBehavior} object.")
         self._blankBehavior = value

   def _getBlankBehavior(self):
      """
      Property target used to get the blanking behavior configuration.
      """
      return self._blankBehavior

   def _setRefreshMediaDelay(self, value):
      """
      Property target used to set the refreshMediaDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._refreshMediaDelay = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._refreshMediaDelay = value

   def _getRefreshMediaDelay(self):
      """
      Property target used to get the action refreshMediaDelay.
      """
      return self._refreshMediaDelay

   def _setEjectDelay(self, value):
      """
      Property target used to set the ejectDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._ejectDelay = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._ejectDelay = value

   def _getEjectDelay(self):
      """
      Property target used to get the action ejectDelay.
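The two delay setters share one normalization rule: accept anything convertible to a non-negative integer, and fold 0 back to `None` because "no delay" is the default. A standalone sketch (`normalize_delay` is a hypothetical name; the real setters store the result on the instance instead of returning it):

```python
def normalize_delay(value):
    """Sketch of the ejectDelay/refreshMediaDelay normalization above.

    Accepts None or anything convertible to a non-negative integer.
    A value of 0 is normalized to None, since no delay is the default.
    """
    if value is None:
        return None
    try:
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError("Delay value must be an integer >= 0.")
    if value < 0:
        raise ValueError("Delay value must be an integer >= 0.")
    return None if value == 0 else value
```

Normalizing 0 to `None` keeps the XML round-trip clean: a default value is simply omitted rather than written out as `<eject_delay>0</eject_delay>`.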
""" return self._ejectDelay sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.") mediaType = property(_getMediaType, _setMediaType, None, "Type of the media (see notes above).") deviceType = property(_getDeviceType, _setDeviceType, None, "Type of the device (optional, see notes above).") devicePath = property(_getDevicePath, _setDevicePath, None, "Filesystem device name for writer device.") deviceScsiId = property(_getDeviceScsiId, _setDeviceScsiId, None, "SCSI id for writer device (optional, see notes above).") driveSpeed = property(_getDriveSpeed, _setDriveSpeed, None, "Speed of the drive.") checkData = property(_getCheckData, _setCheckData, None, "Whether resulting image should be validated.") checkMedia = property(_getCheckMedia, _setCheckMedia, None, "Whether media should be checked before being written to.") warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.") noEject = property(_getNoEject, _setNoEject, None, "Indicates that the writer device should not be ejected.") blankBehavior = property(_getBlankBehavior, _setBlankBehavior, None, "Controls optimized blanking behavior.") refreshMediaDelay = property(_getRefreshMediaDelay, _setRefreshMediaDelay, None, "Delay, in seconds, to add after refreshing media.") ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, to add after ejecting media before closing the tray") ######################################################################## # PurgeConfig class definition ######################################################################## class PurgeConfig(object): """ Class representing a Cedar Backup purge configuration. The following restrictions exist on data in this class: - The purge directory list must be a list of C{PurgeDir} objects. 
   For the C{purgeDirs} list, validation is accomplished through the
   L{util.ObjectTypeList} list implementation that overrides common list methods
   and transparently ensures that each element is a C{PurgeDir}.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, purgeDirs
   """

   def __init__(self, purgeDirs=None):
      """
      Constructor for the C{PurgeConfig} class.
      @param purgeDirs: List of purge directories.
      @raise ValueError: If one of the values is invalid.
      """
      self._purgeDirs = None
      self.purgeDirs = purgeDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PurgeConfig(%s)" % self.purgeDirs

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.purgeDirs != other.purgeDirs:
         if self.purgeDirs < other.purgeDirs:
            return -1
         else:
            return 1
      return 0

   def _setPurgeDirs(self, value):
      """
      Property target used to set the purge dirs list.
      Either the value must be C{None} or each element must be a C{PurgeDir}.
      @raise ValueError: If the value is not a C{PurgeDir}
      """
      if value is None:
         self._purgeDirs = None
      else:
         try:
            saved = self._purgeDirs
            self._purgeDirs = ObjectTypeList(PurgeDir, "PurgeDir")
            self._purgeDirs.extend(value)
         except Exception, e:
            self._purgeDirs = saved
            raise e

   def _getPurgeDirs(self):
      """
      Property target used to get the purge dirs list.
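The list setters above all use the same save/restore idiom: stash the old list, build a fresh type-checked list, and roll back if `extend()` rejects an element. A self-contained sketch of that pattern (`TypedList` and `PurgeHolder` are hypothetical stand-ins, not the real `util.ObjectTypeList`):

```python
class TypedList(list):
    """Minimal stand-in for util.ObjectTypeList: accepts only one element type."""

    def __init__(self, object_type, name):
        super(TypedList, self).__init__()
        self._type = object_type
        self._name = name

    def append(self, item):
        if not isinstance(item, self._type):
            raise ValueError("Item must be a %s." % self._name)
        super(TypedList, self).append(item)

    def extend(self, items):
        for item in items:
            self.append(item)

class PurgeHolder(object):
    """Sketch of the save/restore idiom used by the _setPurgeDirs setter."""

    def __init__(self):
        self._purgeDirs = None

    def _setPurgeDirs(self, value):
        if value is None:
            self._purgeDirs = None
        else:
            saved = self._purgeDirs
            try:
                self._purgeDirs = TypedList(str, "str")
                self._purgeDirs.extend(value)
            except Exception:
                self._purgeDirs = saved  # roll back to the previous list
                raise
```

If any element fails the type check, the exception propagates but the attribute still holds the previous, fully-valid list.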
""" return self._purgeDirs purgeDirs = property(_getPurgeDirs, _setPurgeDirs, None, "List of directories to purge.") ######################################################################## # Config class definition ######################################################################## class Config(object): ###################### # Class documentation ###################### """ Class representing a Cedar Backup XML configuration document. The C{Config} class is a Python object representation of a Cedar Backup XML configuration file. It is intended to be the only Python-language interface to Cedar Backup configuration on disk for both Cedar Backup itself and for external applications. The object representation is two-way: XML data can be used to create a C{Config} object, and then changes to the object can be propogated back to disk. A C{Config} object can even be used to create a configuration file from scratch programmatically. This class and the classes it is composed from often use Python's C{property} construct to validate input and limit access to values. Some validations can only be done once a document is considered "complete" (see module notes for more details). Assignments to the various instance variables must match the expected type, i.e. C{reference} must be a C{ReferenceConfig}. The internal check uses the built-in C{isinstance} function, so it should be OK to use subclasses if you want to. If an instance variable is not set, its value will be C{None}. When an object is initialized without using an XML document, all of the values will be C{None}. Even when an object is initialized using XML, some of the values might be C{None} because not every section is required. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, extractXml, validate, reference, extensions, options, collect, stage, store, purge, _getReference, _setReference, _getExtensions, _setExtensions, _getOptions, _setOptions, _getPeers, _setPeers, _getCollect, _setCollect, _getStage, _setStage, _getStore, _setStore, _getPurge, _setPurge """ ############## # Constructor ############## def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath}, then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{Config.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. 
""" self._reference = None self._extensions = None self._options = None self._peers = None self._collect = None self._stage = None self._store = None self._purge = None self.reference = None self.extensions = None self.options = None self.peers = None self.collect = None self.stage = None self.store = None self.purge = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() ######################### # String representations ######################### def __repr__(self): """ Official string representation for class instance. """ return "Config(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.reference, self.extensions, self.options, self.peers, self.collect, self.stage, self.store, self.purge) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() ############################# # Standard comparison method ############################# def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.reference != other.reference: if self.reference < other.reference: return -1 else: return 1 if self.extensions != other.extensions: if self.extensions < other.extensions: return -1 else: return 1 if self.options != other.options: if self.options < other.options: return -1 else: return 1 if self.peers != other.peers: if self.peers < other.peers: return -1 else: return 1 if self.collect != other.collect: if self.collect < other.collect: return -1 else: return 1 if self.stage != other.stage: if self.stage < other.stage: return -1 else: return 1 if self.store != other.store: if self.store < other.store: return -1 else: return 1 if self.purge != other.purge: if self.purge < other.purge: return -1 else: return 1 return 0 ############# # Properties ############# def _setReference(self, value): """ Property target used to set the reference configuration value. If not C{None}, the value must be a C{ReferenceConfig} object. @raise ValueError: If the value is not a C{ReferenceConfig} """ if value is None: self._reference = None else: if not isinstance(value, ReferenceConfig): raise ValueError("Value must be a C{ReferenceConfig} object.") self._reference = value def _getReference(self): """ Property target used to get the reference configuration value. """ return self._reference def _setExtensions(self, value): """ Property target used to set the extensions configuration value. If not C{None}, the value must be a C{ExtensionsConfig} object. @raise ValueError: If the value is not a C{ExtensionsConfig} """ if value is None: self._extensions = None else: if not isinstance(value, ExtensionsConfig): raise ValueError("Value must be a C{ExtensionsConfig} object.") self._extensions = value def _getExtensions(self): """ Property target used to get the extensions configuration value. """ return self._extensions def _setOptions(self, value): """ Property target used to set the options configuration value. 
If not C{None}, the value must be an C{OptionsConfig} object. @raise ValueError: If the value is not a C{OptionsConfig} """ if value is None: self._options = None else: if not isinstance(value, OptionsConfig): raise ValueError("Value must be a C{OptionsConfig} object.") self._options = value def _getOptions(self): """ Property target used to get the options configuration value. """ return self._options def _setPeers(self, value): """ Property target used to set the peers configuration value. If not C{None}, the value must be an C{PeersConfig} object. @raise ValueError: If the value is not a C{PeersConfig} """ if value is None: self._peers = None else: if not isinstance(value, PeersConfig): raise ValueError("Value must be a C{PeersConfig} object.") self._peers = value def _getPeers(self): """ Property target used to get the peers configuration value. """ return self._peers def _setCollect(self, value): """ Property target used to set the collect configuration value. If not C{None}, the value must be a C{CollectConfig} object. @raise ValueError: If the value is not a C{CollectConfig} """ if value is None: self._collect = None else: if not isinstance(value, CollectConfig): raise ValueError("Value must be a C{CollectConfig} object.") self._collect = value def _getCollect(self): """ Property target used to get the collect configuration value. """ return self._collect def _setStage(self, value): """ Property target used to set the stage configuration value. If not C{None}, the value must be a C{StageConfig} object. @raise ValueError: If the value is not a C{StageConfig} """ if value is None: self._stage = None else: if not isinstance(value, StageConfig): raise ValueError("Value must be a C{StageConfig} object.") self._stage = value def _getStage(self): """ Property target used to get the stage configuration value. """ return self._stage def _setStore(self, value): """ Property target used to set the store configuration value. 
If not C{None}, the value must be a C{StoreConfig} object. @raise ValueError: If the value is not a C{StoreConfig} """ if value is None: self._store = None else: if not isinstance(value, StoreConfig): raise ValueError("Value must be a C{StoreConfig} object.") self._store = value def _getStore(self): """ Property target used to get the store configuration value. """ return self._store def _setPurge(self, value): """ Property target used to set the purge configuration value. If not C{None}, the value must be a C{PurgeConfig} object. @raise ValueError: If the value is not a C{PurgeConfig} """ if value is None: self._purge = None else: if not isinstance(value, PurgeConfig): raise ValueError("Value must be a C{PurgeConfig} object.") self._purge = value def _getPurge(self): """ Property target used to get the purge configuration value. """ return self._purge reference = property(_getReference, _setReference, None, "Reference configuration in terms of a C{ReferenceConfig} object.") extensions = property(_getExtensions, _setExtensions, None, "Extensions configuration in terms of a C{ExtensionsConfig} object.") options = property(_getOptions, _setOptions, None, "Options configuration in terms of a C{OptionsConfig} object.") peers = property(_getPeers, _setPeers, None, "Peers configuration in terms of a C{PeersConfig} object.") collect = property(_getCollect, _setCollect, None, "Collect configuration in terms of a C{CollectConfig} object.") stage = property(_getStage, _setStage, None, "Stage configuration in terms of a C{StageConfig} object.") store = property(_getStore, _setStore, None, "Store configuration in terms of a C{StoreConfig} object.") purge = property(_getPurge, _setPurge, None, "Purge configuration in terms of a C{PurgeConfig} object.") ################# # Public methods ################# def extractXml(self, xmlPath=None, validate=True): """ Extracts configuration into an XML document. 
If C{xmlPath} is not provided, then the XML document will be returned as a string. If C{xmlPath} is provided, then the XML document will be written to the file and C{None} will be returned. Unless the C{validate} parameter is C{False}, the L{Config.validate} method will be called (with its default arguments) against the configuration before extracting the XML. If configuration is not valid, then an XML document will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to write an invalid configuration file to disk. @param xmlPath: Path to an XML file to create on disk. @type xmlPath: Absolute path to a file. @param validate: Validate the document before extracting it. @type validate: Boolean true/false. @return: XML string data or C{None} as described above. @raise ValueError: If configuration within the object is not valid. @raise IOError: If there is an error writing to the file. @raise OSError: If there is an error writing to the file. """ if validate: self.validate() xmlData = self._extractXml() if xmlPath is not None: open(xmlPath, "w").write(xmlData) return None else: return xmlData def validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False): """ Validates configuration represented by the object. This method encapsulates all of the validations that should apply to a fully "complete" document but are not already taken care of by earlier validations. It also provides some extra convenience functionality which might be useful to some people. The process of validation is laid out in the I{Validation} section in the class notes (above). @param requireOneAction: Require at least one of the collect, stage, store or purge sections. @param requireReference: Require the reference section. 
@param requireExtensions: Require the extensions section. @param requireOptions: Require the options section. @param requirePeers: Require the peers section. @param requireCollect: Require the collect section. @param requireStage: Require the stage section. @param requireStore: Require the store section. @param requirePurge: Require the purge section. @raise ValueError: If one of the validations fails. """ if requireOneAction and (self.collect, self.stage, self.store, self.purge) == (None, None, None, None): raise ValueError("At least one of the collect, stage, store and purge sections is required.") if requireReference and self.reference is None: raise ValueError("The reference section is required.") if requireExtensions and self.extensions is None: raise ValueError("The extensions section is required.") if requireOptions and self.options is None: raise ValueError("The options section is required.") if requirePeers and self.peers is None: raise ValueError("The peers section is required.") if requireCollect and self.collect is None: raise ValueError("The collect section is required.") if requireStage and self.stage is None: raise ValueError("The stage section is required.") if requireStore and self.store is None: raise ValueError("The store section is required.") if requirePurge and self.purge is None: raise ValueError("The purge section is required.") self._validateContents() ##################################### # High-level methods for parsing XML ##################################### def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls individual static methods to parse each of the individual configuration sections. Most of the validation we do here has to do with whether the document can be parsed and whether any values which exist are valid. 
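Each require* flag in validate() reduces to the same check: a section that must be present is None. A stripped-down sketch of that flag-driven logic (simplified signature and a fake config object, not the full method):

```python
def validate(config, requireOneAction=True, requireOptions=True):
    """Sketch of the flag-driven validation: each require* flag demands
    that the corresponding section attribute is not None."""
    sections = (config.collect, config.stage, config.store, config.purge)
    if requireOneAction and sections == (None, None, None, None):
        raise ValueError("At least one of the collect, stage, store and purge sections is required.")
    if requireOptions and config.options is None:
        raise ValueError("The options section is required.")

class FakeConfig(object):
    """Hypothetical minimal config carrier for the sketch."""
    collect = stage = store = purge = options = None

config = FakeConfig()
try:
    validate(config)          # fails: no action section at all
except ValueError as e:
    print(e)
config.collect = object()     # any one action section satisfies the check
config.options = object()
validate(config)              # now passes
```

The real method takes nine flags, but every one of them follows this single None-check shape before delegating to _validateContents().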
We don't do much validation as to whether required elements actually exist unless we have to to make sense of the document (instead, that's the job of the L{validate} method). @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._reference = Config._parseReference(parentNode) self._extensions = Config._parseExtensions(parentNode) self._options = Config._parseOptions(parentNode) self._peers = Config._parsePeers(parentNode) self._collect = Config._parseCollect(parentNode) self._stage = Config._parseStage(parentNode) self._store = Config._parseStore(parentNode) self._purge = Config._parsePurge(parentNode) @staticmethod def _parseReference(parentNode): """ Parses a reference configuration section. We read the following fields:: author //cb_config/reference/author revision //cb_config/reference/revision description //cb_config/reference/description generator //cb_config/reference/generator @param parentNode: Parent node to search beneath. @return: C{ReferenceConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ reference = None sectionNode = readFirstChild(parentNode, "reference") if sectionNode is not None: reference = ReferenceConfig() reference.author = readString(sectionNode, "author") reference.revision = readString(sectionNode, "revision") reference.description = readString(sectionNode, "description") reference.generator = readString(sectionNode, "generator") return reference @staticmethod def _parseExtensions(parentNode): """ Parses an extensions configuration section. 
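Every _parseX method follows the pattern shown for _parseReference: locate the section element, then read simple child values. The helpers readFirstChild/readString live elsewhere in the package; a minimal stand-in using xml.dom.minidom (hypothetical simplified helpers, not the real implementations) looks like:

```python
from xml.dom.minidom import parseString

def readFirstChild(parent, name):
    """Return the first child element with the given tag name, or None."""
    for node in parent.childNodes:
        if node.nodeType == node.ELEMENT_NODE and node.tagName == name:
            return node
    return None

def readString(parent, name):
    """Return the text content of the named child element, or None if the
    element is missing or empty."""
    node = readFirstChild(parent, name)
    if node is None or not node.childNodes:
        return None
    return node.childNodes[0].data

xml = "<cb_config><reference><author>pronovic</author></reference></cb_config>"
root = parseString(xml).documentElement
section = readFirstChild(root, "reference")   # section node, or None if absent
author = readString(section, "author")        # "pronovic"
```

Missing sections fall through as None at every level, which is why each _parseX method can simply return None when its section node is absent.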
We read the following fields:: orderMode //cb_config/extensions/order_mode We also read groups of the following items, one list element per item:: name //cb_config/extensions/action/name module //cb_config/extensions/action/module function //cb_config/extensions/action/function index //cb_config/extensions/action/index dependencies //cb_config/extensions/action/depends The extended actions are parsed by L{_parseExtendedActions}. @param parentNode: Parent node to search beneath. @return: C{ExtensionsConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ extensions = None sectionNode = readFirstChild(parentNode, "extensions") if sectionNode is not None: extensions = ExtensionsConfig() extensions.orderMode = readString(sectionNode, "order_mode") extensions.actions = Config._parseExtendedActions(sectionNode) return extensions @staticmethod def _parseOptions(parentNode): """ Parses an options configuration section. We read the following fields:: startingDay //cb_config/options/starting_day workingDir //cb_config/options/working_dir backupUser //cb_config/options/backup_user backupGroup //cb_config/options/backup_group rcpCommand //cb_config/options/rcp_command rshCommand //cb_config/options/rsh_command cbackCommand //cb_config/options/cback_command managedActions //cb_config/options/managed_actions The list of managed actions is a comma-separated list of action names. We also read groups of the following items, one list element per item:: overrides //cb_config/options/override hooks //cb_config/options/hook The overrides are parsed by L{_parseOverrides} and the hooks are parsed by L{_parseHooks}. @param parentNode: Parent node to search beneath. @return: C{OptionsConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" options = None sectionNode = readFirstChild(parentNode, "options") if sectionNode is not None: options = OptionsConfig() options.startingDay = readString(sectionNode, "starting_day") options.workingDir = readString(sectionNode, "working_dir") options.backupUser = readString(sectionNode, "backup_user") options.backupGroup = readString(sectionNode, "backup_group") options.rcpCommand = readString(sectionNode, "rcp_command") options.rshCommand = readString(sectionNode, "rsh_command") options.cbackCommand = readString(sectionNode, "cback_command") options.overrides = Config._parseOverrides(sectionNode) options.hooks = Config._parseHooks(sectionNode) managedActions = readString(sectionNode, "managed_actions") options.managedActions = parseCommaSeparatedString(managedActions) return options @staticmethod def _parsePeers(parentNode): """ Parses a peers configuration section. We read groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual peer entries are parsed by L{_parsePeerList}. @param parentNode: Parent node to search beneath. @return: C{StageConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ peers = None sectionNode = readFirstChild(parentNode, "peers") if sectionNode is not None: peers = PeersConfig() (peers.localPeers, peers.remotePeers) = Config._parsePeerList(sectionNode) return peers @staticmethod def _parseCollect(parentNode): """ Parses a collect configuration section. 
We read the following individual fields:: targetDir //cb_config/collect/collect_dir collectMode //cb_config/collect/collect_mode archiveMode //cb_config/collect/archive_mode ignoreFile //cb_config/collect/ignore_file We also read groups of the following items, one list element per item:: absoluteExcludePaths //cb_config/collect/exclude/abs_path excludePatterns //cb_config/collect/exclude/pattern collectFiles //cb_config/collect/file collectDirs //cb_config/collect/dir The exclusions are parsed by L{_parseExclusions}, the collect files are parsed by L{_parseCollectFiles}, and the directories are parsed by L{_parseCollectDirs}. @param parentNode: Parent node to search beneath. @return: C{CollectConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ collect = None sectionNode = readFirstChild(parentNode, "collect") if sectionNode is not None: collect = CollectConfig() collect.targetDir = readString(sectionNode, "collect_dir") collect.collectMode = readString(sectionNode, "collect_mode") collect.archiveMode = readString(sectionNode, "archive_mode") collect.ignoreFile = readString(sectionNode, "ignore_file") (collect.absoluteExcludePaths, unused, collect.excludePatterns) = Config._parseExclusions(sectionNode) collect.collectFiles = Config._parseCollectFiles(sectionNode) collect.collectDirs = Config._parseCollectDirs(sectionNode) return collect @staticmethod def _parseStage(parentNode): """ Parses a stage configuration section. We read the following individual fields:: targetDir //cb_config/stage/staging_dir We also read groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual peer entries are parsed by L{_parsePeerList}. @param parentNode: Parent node to search beneath. @return: C{StageConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" stage = None sectionNode = readFirstChild(parentNode, "stage") if sectionNode is not None: stage = StageConfig() stage.targetDir = readString(sectionNode, "staging_dir") (stage.localPeers, stage.remotePeers) = Config._parsePeerList(sectionNode) return stage @staticmethod def _parseStore(parentNode): """ Parses a store configuration section. We read the following fields:: sourceDir //cb_config/store/source_dir mediaType //cb_config/store/media_type deviceType //cb_config/store/device_type devicePath //cb_config/store/target_device deviceScsiId //cb_config/store/target_scsi_id driveSpeed //cb_config/store/drive_speed checkData //cb_config/store/check_data checkMedia //cb_config/store/check_media warnMidnite //cb_config/store/warn_midnite noEject //cb_config/store/no_eject Blanking behavior configuration is parsed by the C{_parseBlankBehavior} method. @param parentNode: Parent node to search beneath. @return: C{StoreConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" store = None sectionNode = readFirstChild(parentNode, "store") if sectionNode is not None: store = StoreConfig() store.sourceDir = readString(sectionNode, "source_dir") store.mediaType = readString(sectionNode, "media_type") store.deviceType = readString(sectionNode, "device_type") store.devicePath = readString(sectionNode, "target_device") store.deviceScsiId = readString(sectionNode, "target_scsi_id") store.driveSpeed = readInteger(sectionNode, "drive_speed") store.checkData = readBoolean(sectionNode, "check_data") store.checkMedia = readBoolean(sectionNode, "check_media") store.warnMidnite = readBoolean(sectionNode, "warn_midnite") store.noEject = readBoolean(sectionNode, "no_eject") store.blankBehavior = Config._parseBlankBehavior(sectionNode) store.refreshMediaDelay = readInteger(sectionNode, "refresh_media_delay") store.ejectDelay = readInteger(sectionNode, "eject_delay") return store @staticmethod def _parsePurge(parentNode): """ Parses a purge configuration section. We read groups of the following items, one list element per item:: purgeDirs //cb_config/purge/dir The individual directory entries are parsed by L{_parsePurgeDirs}. @param parentNode: Parent node to search beneath. @return: C{PurgeConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ purge = None sectionNode = readFirstChild(parentNode, "purge") if sectionNode is not None: purge = PurgeConfig() purge.purgeDirs = Config._parsePurgeDirs(sectionNode) return purge @staticmethod def _parseExtendedActions(parentNode): """ Reads extended actions data from immediately beneath the parent. We read the following individual fields from each extended action:: name name module module function function index index dependencies depends Dependency information is parsed by the C{_parseDependencies} method. @param parentNode: Parent node to search beneath. @return: List of extended actions. 
@raise ValueError: If the data at the location can't be read """ lst = [] for entry in readChildren(parentNode, "action"): if isElement(entry): action = ExtendedAction() action.name = readString(entry, "name") action.module = readString(entry, "module") action.function = readString(entry, "function") action.index = readInteger(entry, "index") action.dependencies = Config._parseDependencies(entry) lst.append(action) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: absolute exclude/abs_path relative exclude/rel_path patterns exclude/pattern If there are none of some pattern (i.e. no relative path items) then C{None} will be returned for that item in the tuple. This method can be used to parse exclusions on both the collect configuration level and on the collect directory level within collect configuration. @param parentNode: Parent node to search beneath. @return: Tuple of (absolute, relative, patterns) exclusions. """ sectionNode = readFirstChild(parentNode, "exclude") if sectionNode is None: return (None, None, None) else: absolute = readStringList(sectionNode, "abs_path") relative = readStringList(sectionNode, "rel_path") patterns = readStringList(sectionNode, "pattern") return (absolute, relative, patterns) @staticmethod def _parseOverrides(parentNode): """ Reads a list of C{CommandOverride} objects from immediately beneath the parent. We read the following individual fields:: command command absolutePath abs_path @param parentNode: Parent node to search beneath. @return: List of C{CommandOverride} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "override"): if isElement(entry): override = CommandOverride() override.command = readString(entry, "command") override.absolutePath = readString(entry, "abs_path") lst.append(override) if lst == []: lst = None return lst @staticmethod def _parseHooks(parentNode): """ Reads a list of C{ActionHook} objects from immediately beneath the parent. We read the following individual fields:: action action command command @param parentNode: Parent node to search beneath. @return: List of C{ActionHook} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parentNode, "pre_action_hook"): if isElement(entry): hook = PreActionHook() hook.action = readString(entry, "action") hook.command = readString(entry, "command") lst.append(hook) for entry in readChildren(parentNode, "post_action_hook"): if isElement(entry): hook = PostActionHook() hook.action = readString(entry, "action") hook.command = readString(entry, "command") lst.append(hook) if lst == []: lst = None return lst @staticmethod def _parseCollectFiles(parentNode): """ Reads a list of C{CollectFile} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode mode I{or} collect_mode archiveMode archive_mode The collect mode is a special case. Just a C{mode} tag is accepted, but we prefer C{collect_mode} for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only C{mode} will be used. @param parentNode: Parent node to search beneath. @return: List of C{CollectFile} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "file"): if isElement(entry): cfile = CollectFile() cfile.absolutePath = readString(entry, "abs_path") cfile.collectMode = readString(entry, "mode") if cfile.collectMode is None: cfile.collectMode = readString(entry, "collect_mode") cfile.archiveMode = readString(entry, "archive_mode") lst.append(cfile) if lst == []: lst = None return lst @staticmethod def _parseCollectDirs(parentNode): """ Reads a list of C{CollectDir} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode mode I{or} collect_mode archiveMode archive_mode ignoreFile ignore_file linkDepth link_depth dereference dereference recursionLevel recursion_level The collect mode is a special case. Just a C{mode} tag is accepted for backwards compatibility, but we prefer C{collect_mode} for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only C{mode} will be used. We also read groups of the following items, one list element per item:: absoluteExcludePaths exclude/abs_path relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. @param parentNode: Parent node to search beneath. @return: List of C{CollectDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "dir"): if isElement(entry): cdir = CollectDir() cdir.absolutePath = readString(entry, "abs_path") cdir.collectMode = readString(entry, "mode") if cdir.collectMode is None: cdir.collectMode = readString(entry, "collect_mode") cdir.archiveMode = readString(entry, "archive_mode") cdir.ignoreFile = readString(entry, "ignore_file") cdir.linkDepth = readInteger(entry, "link_depth") cdir.dereference = readBoolean(entry, "dereference") cdir.recursionLevel = readInteger(entry, "recursion_level") (cdir.absoluteExcludePaths, cdir.relativeExcludePaths, cdir.excludePatterns) = Config._parseExclusions(entry) lst.append(cdir) if lst == []: lst = None return lst @staticmethod def _parsePurgeDirs(parentNode): """ Reads a list of C{PurgeDir} objects from immediately beneath the parent. We read the following individual fields:: absolutePath /abs_path retainDays /retain_days @param parentNode: Parent node to search beneath. @return: List of C{PurgeDir} objects or C{None} if none are found. @raise ValueError: If the data at the location can't be read """ lst = [] for entry in readChildren(parentNode, "dir"): if isElement(entry): cdir = PurgeDir() cdir.absolutePath = readString(entry, "abs_path") cdir.retainDays = readInteger(entry, "retain_days") lst.append(cdir) if lst == []: lst = None return lst @staticmethod def _parsePeerList(parentNode): """ Reads remote and local peer data from immediately beneath the parent. We read the following individual fields for both remote and local peers:: name name collectDir collect_dir We also read the following individual fields for remote peers only:: remoteUser backup_user rcpCommand rcp_command rshCommand rsh_command cbackCommand cback_command managed managed managedActions managed_actions Additionally, the value in the C{type} field is used to determine whether this entry is a remote peer. If the type is C{"remote"}, it's a remote peer, and if the type is C{"local"}, it's a remote peer. 
If there are none of one type of peer (i.e. no local peers) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (local, remote) peer lists. @raise ValueError: If the data at the location can't be read """ localPeers = [] remotePeers = [] for entry in readChildren(parentNode, "peer"): if isElement(entry): peerType = readString(entry, "type") if peerType == "local": localPeer = LocalPeer() localPeer.name = readString(entry, "name") localPeer.collectDir = readString(entry, "collect_dir") localPeer.ignoreFailureMode = readString(entry, "ignore_failures") localPeers.append(localPeer) elif peerType == "remote": remotePeer = RemotePeer() remotePeer.name = readString(entry, "name") remotePeer.collectDir = readString(entry, "collect_dir") remotePeer.remoteUser = readString(entry, "backup_user") remotePeer.rcpCommand = readString(entry, "rcp_command") remotePeer.rshCommand = readString(entry, "rsh_command") remotePeer.cbackCommand = readString(entry, "cback_command") remotePeer.ignoreFailureMode = readString(entry, "ignore_failures") remotePeer.managed = readBoolean(entry, "managed") managedActions = readString(entry, "managed_actions") remotePeer.managedActions = parseCommaSeparatedString(managedActions) remotePeers.append(remotePeer) if localPeers == []: localPeers = None if remotePeers == []: remotePeers = None return (localPeers, remotePeers) @staticmethod def _parseDependencies(parentNode): """ Reads extended action dependency information from a parent node. We read the following individual fields:: runBefore depends/run_before runAfter depends/run_after Each of these fields is a comma-separated list of action names. The result is placed into an C{ActionDependencies} object. If the dependencies parent node does not exist, C{None} will be returned. Otherwise, an C{ActionDependencies} object will always be created, even if it does not contain any actual dependencies in it. 
@param parentNode: Parent node to search beneath. @return: C{ActionDependencies} object or C{None}. @raise ValueError: If the data at the location can't be read """ sectionNode = readFirstChild(parentNode, "depends") if sectionNode is None: return None else: runBefore = readString(sectionNode, "run_before") runAfter = readString(sectionNode, "run_after") beforeList = parseCommaSeparatedString(runBefore) afterList = parseCommaSeparatedString(runAfter) return ActionDependencies(beforeList, afterList) @staticmethod def _parseBlankBehavior(parentNode): """ Reads a single C{BlankBehavior} object from immediately beneath the parent. We read the following individual fields:: blankMode blank_behavior/mode blankFactor blank_behavior/factor @param parentNode: Parent node to search beneath. @return: C{BlankBehavior} object or C{None} if the section is not found @raise ValueError: If some filled-in value is invalid. """ blankBehavior = None sectionNode = readFirstChild(parentNode, "blank_behavior") if sectionNode is not None: blankBehavior = BlankBehavior() blankBehavior.blankMode = readString(sectionNode, "mode") blankBehavior.blankFactor = readString(sectionNode, "factor") return blankBehavior ######################################## # High-level methods for generating XML ######################################## def _extractXml(self): """ Internal method to extract configuration into an XML string. This method assumes that the internal L{validate} method has been called prior to extracting the XML, if the caller cares. No validation will be done internally. As a general rule, fields that are set to C{None} will be extracted into the document as empty tags. The same goes for container tags that are filled based on lists - if the list is empty or C{None}, the container tag will be empty. 
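_parseDependencies relies on parseCommaSeparatedString to turn the run_before/run_after text into lists of action names. That helper is defined in the util module; a plausible stand-in (assumed behavior: None in gives None out, whitespace is stripped, empty tokens are dropped — not necessarily the exact real semantics) is:

```python
def parseCommaSeparatedString(value):
    """Split a comma-separated string into a list of stripped, non-empty
    tokens; a missing value (None) maps through as None."""
    if value is None:
        return None
    return [token.strip() for token in value.split(",") if token.strip()]

# A run_before value like "collect, stage,store" becomes a clean list:
print(parseCommaSeparatedString("collect, stage,store"))
# ['collect', 'stage', 'store']
```

Mapping None through unchanged matters here: a missing run_before or run_after element simply yields an empty side in the resulting ActionDependencies object.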
""" (xmlDom, parentNode) = createOutputDom() Config._addReference(xmlDom, parentNode, self.reference) Config._addExtensions(xmlDom, parentNode, self.extensions) Config._addOptions(xmlDom, parentNode, self.options) Config._addPeers(xmlDom, parentNode, self.peers) Config._addCollect(xmlDom, parentNode, self.collect) Config._addStage(xmlDom, parentNode, self.stage) Config._addStore(xmlDom, parentNode, self.store) Config._addPurge(xmlDom, parentNode, self.purge) xmlData = serializeDom(xmlDom) xmlDom.unlink() return xmlData @staticmethod def _addReference(xmlDom, parentNode, referenceConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: author //cb_config/reference/author revision //cb_config/reference/revision description //cb_config/reference/description generator //cb_config/reference/generator If C{referenceConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param referenceConfig: Reference configuration section to be added to the document. """ if referenceConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "reference") addStringNode(xmlDom, sectionNode, "author", referenceConfig.author) addStringNode(xmlDom, sectionNode, "revision", referenceConfig.revision) addStringNode(xmlDom, sectionNode, "description", referenceConfig.description) addStringNode(xmlDom, sectionNode, "generator", referenceConfig.generator) @staticmethod def _addExtensions(xmlDom, parentNode, extensionsConfig): """ Adds an configuration section as the next child of a parent. We add the following fields to the document:: order_mode //cb_config/extensions/order_mode We also add groups of the following items, one list element per item:: actions //cb_config/extensions/action The extended action entries are added by L{_addExtendedAction}. 
If C{extensionsConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param extensionsConfig: Extensions configuration section to be added to the document. """ if extensionsConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "extensions") addStringNode(xmlDom, sectionNode, "order_mode", extensionsConfig.orderMode) if extensionsConfig.actions is not None: for action in extensionsConfig.actions: Config._addExtendedAction(xmlDom, sectionNode, action) @staticmethod def _addOptions(xmlDom, parentNode, optionsConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: startingDay //cb_config/options/starting_day workingDir //cb_config/options/working_dir backupUser //cb_config/options/backup_user backupGroup //cb_config/options/backup_group rcpCommand //cb_config/options/rcp_command rshCommand //cb_config/options/rsh_command cbackCommand //cb_config/options/cback_command managedActions //cb_config/options/managed_actions We also add groups of the following items, one list element per item:: overrides //cb_config/options/override hooks //cb_config/options/pre_action_hook hooks //cb_config/options/post_action_hook The individual override items are added by L{_addOverride}. The individual hook items are added by L{_addHook}. If C{optionsConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param optionsConfig: Options configuration section to be added to the document. 
""" if optionsConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "options") addStringNode(xmlDom, sectionNode, "starting_day", optionsConfig.startingDay) addStringNode(xmlDom, sectionNode, "working_dir", optionsConfig.workingDir) addStringNode(xmlDom, sectionNode, "backup_user", optionsConfig.backupUser) addStringNode(xmlDom, sectionNode, "backup_group", optionsConfig.backupGroup) addStringNode(xmlDom, sectionNode, "rcp_command", optionsConfig.rcpCommand) addStringNode(xmlDom, sectionNode, "rsh_command", optionsConfig.rshCommand) addStringNode(xmlDom, sectionNode, "cback_command", optionsConfig.cbackCommand) managedActions = Config._buildCommaSeparatedString(optionsConfig.managedActions) addStringNode(xmlDom, sectionNode, "managed_actions", managedActions) if optionsConfig.overrides is not None: for override in optionsConfig.overrides: Config._addOverride(xmlDom, sectionNode, override) if optionsConfig.hooks is not None: for hook in optionsConfig.hooks: Config._addHook(xmlDom, sectionNode, hook) @staticmethod def _addPeers(xmlDom, parentNode, peersConfig): """ Adds a configuration section as the next child of a parent. We add groups of the following items, one list element per item:: localPeers //cb_config/peers/peer remotePeers //cb_config/peers/peer The individual local and remote peer entries are added by L{_addLocalPeer} and L{_addRemotePeer}, respectively. If C{peersConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param peersConfig: Peers configuration section to be added to the document. 
""" if peersConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "peers") if peersConfig.localPeers is not None: for localPeer in peersConfig.localPeers: Config._addLocalPeer(xmlDom, sectionNode, localPeer) if peersConfig.remotePeers is not None: for remotePeer in peersConfig.remotePeers: Config._addRemotePeer(xmlDom, sectionNode, remotePeer) @staticmethod def _addCollect(xmlDom, parentNode, collectConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: targetDir //cb_config/collect/collect_dir collectMode //cb_config/collect/collect_mode archiveMode //cb_config/collect/archive_mode ignoreFile //cb_config/collect/ignore_file We also add groups of the following items, one list element per item:: absoluteExcludePaths //cb_config/collect/exclude/abs_path excludePatterns //cb_config/collect/exclude/pattern collectFiles //cb_config/collect/file collectDirs //cb_config/collect/dir The individual collect files are added by L{_addCollectFile} and individual collect directories are added by L{_addCollectDir}. If C{collectConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectConfig: Collect configuration section to be added to the document. 
""" if collectConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "collect") addStringNode(xmlDom, sectionNode, "collect_dir", collectConfig.targetDir) addStringNode(xmlDom, sectionNode, "collect_mode", collectConfig.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectConfig.archiveMode) addStringNode(xmlDom, sectionNode, "ignore_file", collectConfig.ignoreFile) if ((collectConfig.absoluteExcludePaths is not None and collectConfig.absoluteExcludePaths != []) or (collectConfig.excludePatterns is not None and collectConfig.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if collectConfig.absoluteExcludePaths is not None: for absolutePath in collectConfig.absoluteExcludePaths: addStringNode(xmlDom, excludeNode, "abs_path", absolutePath) if collectConfig.excludePatterns is not None: for pattern in collectConfig.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) if collectConfig.collectFiles is not None: for collectFile in collectConfig.collectFiles: Config._addCollectFile(xmlDom, sectionNode, collectFile) if collectConfig.collectDirs is not None: for collectDir in collectConfig.collectDirs: Config._addCollectDir(xmlDom, sectionNode, collectDir) @staticmethod def _addStage(xmlDom, parentNode, stageConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: targetDir //cb_config/stage/staging_dir We also add groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual local and remote peer entries are added by L{_addLocalPeer} and L{_addRemotePeer}, respectively. If C{stageConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param stageConfig: Stage configuration section to be added to the document. 
""" if stageConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "stage") addStringNode(xmlDom, sectionNode, "staging_dir", stageConfig.targetDir) if stageConfig.localPeers is not None: for localPeer in stageConfig.localPeers: Config._addLocalPeer(xmlDom, sectionNode, localPeer) if stageConfig.remotePeers is not None: for remotePeer in stageConfig.remotePeers: Config._addRemotePeer(xmlDom, sectionNode, remotePeer) @staticmethod def _addStore(xmlDom, parentNode, storeConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: sourceDir //cb_config/store/source_dir mediaType //cb_config/store/media_type deviceType //cb_config/store/device_type devicePath //cb_config/store/target_device deviceScsiId //cb_config/store/target_scsi_id driveSpeed //cb_config/store/drive_speed checkData //cb_config/store/check_data checkMedia //cb_config/store/check_media warnMidnite //cb_config/store/warn_midnite noEject //cb_config/store/no_eject refreshMediaDelay //cb_config/store/refresh_media_delay ejectDelay //cb_config/store/eject_delay Blanking behavior configuration is added by the L{_addBlankBehavior} method. If C{storeConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param storeConfig: Store configuration section to be added to the document. 
""" if storeConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "store") addStringNode(xmlDom, sectionNode, "source_dir", storeConfig.sourceDir) addStringNode(xmlDom, sectionNode, "media_type", storeConfig.mediaType) addStringNode(xmlDom, sectionNode, "device_type", storeConfig.deviceType) addStringNode(xmlDom, sectionNode, "target_device", storeConfig.devicePath) addStringNode(xmlDom, sectionNode, "target_scsi_id", storeConfig.deviceScsiId) addIntegerNode(xmlDom, sectionNode, "drive_speed", storeConfig.driveSpeed) addBooleanNode(xmlDom, sectionNode, "check_data", storeConfig.checkData) addBooleanNode(xmlDom, sectionNode, "check_media", storeConfig.checkMedia) addBooleanNode(xmlDom, sectionNode, "warn_midnite", storeConfig.warnMidnite) addBooleanNode(xmlDom, sectionNode, "no_eject", storeConfig.noEject) addIntegerNode(xmlDom, sectionNode, "refresh_media_delay", storeConfig.refreshMediaDelay) addIntegerNode(xmlDom, sectionNode, "eject_delay", storeConfig.ejectDelay) Config._addBlankBehavior(xmlDom, sectionNode, storeConfig.blankBehavior) @staticmethod def _addPurge(xmlDom, parentNode, purgeConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: purgeDirs //cb_config/purge/dir The individual directory entries are added by L{_addPurgeDir}. If C{purgeConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param purgeConfig: Purge configuration section to be added to the document. """ if purgeConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "purge") if purgeConfig.purgeDirs is not None: for purgeDir in purgeConfig.purgeDirs: Config._addPurgeDir(xmlDom, sectionNode, purgeDir) @staticmethod def _addExtendedAction(xmlDom, parentNode, action): """ Adds an extended action container as the next child of a parent. 
We add the following fields to the document:: name action/name module action/module function action/function index action/index dependencies action/depends Dependencies are added by the L{_addDependencies} method. The node itself is created as the next child of the parent node. This method only adds one action node. The parent must loop for each action in the C{ExtensionsConfig} object. If C{action} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param action: Extended action to be added to the document. """ if action is not None: sectionNode = addContainerNode(xmlDom, parentNode, "action") addStringNode(xmlDom, sectionNode, "name", action.name) addStringNode(xmlDom, sectionNode, "module", action.module) addStringNode(xmlDom, sectionNode, "function", action.function) addIntegerNode(xmlDom, sectionNode, "index", action.index) Config._addDependencies(xmlDom, sectionNode, action.dependencies) @staticmethod def _addOverride(xmlDom, parentNode, override): """ Adds a command override container as the next child of a parent. We add the following fields to the document:: command override/command absolutePath override/abs_path The node itself is created as the next child of the parent node. This method only adds one override node. The parent must loop for each override in the C{OptionsConfig} object. If C{override} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param override: Command override to be added to the document.
""" if override is not None: sectionNode = addContainerNode(xmlDom, parentNode, "override") addStringNode(xmlDom, sectionNode, "command", override.command) addStringNode(xmlDom, sectionNode, "abs_path", override.absolutePath) @staticmethod def _addHook(xmlDom, parentNode, hook): """ Adds an action hook container as the next child of a parent. The behavior varies depending on the value of the C{before} and C{after} flags on the hook. If the C{before} flag is set, it's a pre-action hook, and we'll add the following fields:: action pre_action_hook/action command pre_action_hook/command If the C{after} flag is set, it's a post-action hook, and we'll add the following fields:: action post_action_hook/action command post_action_hook/command The or node itself is created as the next child of the parent node. This method only adds one hook node. The parent must loop for each hook in the C{OptionsConfig} object. If C{hook} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param hook: Command hook to be added to the document. """ if hook is not None: if hook.before: sectionNode = addContainerNode(xmlDom, parentNode, "pre_action_hook") else: sectionNode = addContainerNode(xmlDom, parentNode, "post_action_hook") addStringNode(xmlDom, sectionNode, "action", hook.action) addStringNode(xmlDom, sectionNode, "command", hook.command) @staticmethod def _addCollectFile(xmlDom, parentNode, collectFile): """ Adds a collect file container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path collectMode dir/collect_mode archiveMode dir/archive_mode Note that for consistency with collect directory handling we'll only emit the preferred C{collect_mode} tag. The node itself is created as the next child of the parent node. This method only adds one collect file node. 
The parent must loop for each collect file in the C{CollectConfig} object. If C{collectFile} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectFile: Collect file to be added to the document. """ if collectFile is not None: sectionNode = addContainerNode(xmlDom, parentNode, "file") addStringNode(xmlDom, sectionNode, "abs_path", collectFile.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", collectFile.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectFile.archiveMode) @staticmethod def _addCollectDir(xmlDom, parentNode, collectDir): """ Adds a collect directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path collectMode dir/collect_mode archiveMode dir/archive_mode ignoreFile dir/ignore_file linkDepth dir/link_depth dereference dir/dereference recursionLevel dir/recursion_level Note that an original XML document might have listed the collect mode using the C{mode} tag, since we accept both C{collect_mode} and C{mode}. However, here we'll only emit the preferred C{collect_mode} tag. We also add groups of the following items, one list element per item:: absoluteExcludePaths dir/exclude/abs_path relativeExcludePaths dir/exclude/rel_path excludePatterns dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one collect directory node. The parent must loop for each collect directory in the C{CollectConfig} object. If C{collectDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectDir: Collect directory to be added to the document. 
""" if collectDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", collectDir.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", collectDir.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectDir.archiveMode) addStringNode(xmlDom, sectionNode, "ignore_file", collectDir.ignoreFile) addIntegerNode(xmlDom, sectionNode, "link_depth", collectDir.linkDepth) addBooleanNode(xmlDom, sectionNode, "dereference", collectDir.dereference) addIntegerNode(xmlDom, sectionNode, "recursion_level", collectDir.recursionLevel) if ((collectDir.absoluteExcludePaths is not None and collectDir.absoluteExcludePaths != []) or (collectDir.relativeExcludePaths is not None and collectDir.relativeExcludePaths != []) or (collectDir.excludePatterns is not None and collectDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if collectDir.absoluteExcludePaths is not None: for absolutePath in collectDir.absoluteExcludePaths: addStringNode(xmlDom, excludeNode, "abs_path", absolutePath) if collectDir.relativeExcludePaths is not None: for relativePath in collectDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if collectDir.excludePatterns is not None: for pattern in collectDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) @staticmethod def _addLocalPeer(xmlDom, parentNode, localPeer): """ Adds a local peer container as the next child of a parent. We add the following fields to the document:: name peer/name collectDir peer/collect_dir ignoreFailureMode peer/ignore_failures Additionally, C{peer/type} is filled in with C{"local"}, since this is a local peer. The node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the C{StageConfig} object. If C{localPeer} is C{None}, this method call will be a no-op. 
@param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param localPeer: Local peer to be added to the document. """ if localPeer is not None: sectionNode = addContainerNode(xmlDom, parentNode, "peer") addStringNode(xmlDom, sectionNode, "name", localPeer.name) addStringNode(xmlDom, sectionNode, "type", "local") addStringNode(xmlDom, sectionNode, "collect_dir", localPeer.collectDir) addStringNode(xmlDom, sectionNode, "ignore_failures", localPeer.ignoreFailureMode) @staticmethod def _addRemotePeer(xmlDom, parentNode, remotePeer): """ Adds a remote peer container as the next child of a parent. We add the following fields to the document:: name peer/name collectDir peer/collect_dir remoteUser peer/backup_user rcpCommand peer/rcp_command rshCommand peer/rsh_command cbackCommand peer/cback_command ignoreFailureMode peer/ignore_failures managed peer/managed managedActions peer/managed_actions Additionally, C{peer/type} is filled in with C{"remote"}, since this is a remote peer. The node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the C{StageConfig} object. If C{remotePeer} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param remotePeer: Remote peer to be added to the document.
""" if remotePeer is not None: sectionNode = addContainerNode(xmlDom, parentNode, "peer") addStringNode(xmlDom, sectionNode, "name", remotePeer.name) addStringNode(xmlDom, sectionNode, "type", "remote") addStringNode(xmlDom, sectionNode, "collect_dir", remotePeer.collectDir) addStringNode(xmlDom, sectionNode, "backup_user", remotePeer.remoteUser) addStringNode(xmlDom, sectionNode, "rcp_command", remotePeer.rcpCommand) addStringNode(xmlDom, sectionNode, "rsh_command", remotePeer.rshCommand) addStringNode(xmlDom, sectionNode, "cback_command", remotePeer.cbackCommand) addStringNode(xmlDom, sectionNode, "ignore_failures", remotePeer.ignoreFailureMode) addBooleanNode(xmlDom, sectionNode, "managed", remotePeer.managed) managedActions = Config._buildCommaSeparatedString(remotePeer.managedActions) addStringNode(xmlDom, sectionNode, "managed_actions", managedActions) @staticmethod def _addPurgeDir(xmlDom, parentNode, purgeDir): """ Adds a purge directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path retainDays dir/retain_days The node itself is created as the next child of the parent node. This method only adds one purge directory node. The parent must loop for each purge directory in the C{PurgeConfig} object. If C{purgeDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param purgeDir: Purge directory to be added to the document. """ if purgeDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", purgeDir.absolutePath) addIntegerNode(xmlDom, sectionNode, "retain_days", purgeDir.retainDays) @staticmethod def _addDependencies(xmlDom, parentNode, dependencies): """ Adds a extended action dependencies to parent node. 
We add the following fields to the document:: runBefore depends/run_before runAfter depends/run_after If C{dependencies} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param dependencies: C{ActionDependencies} object to be added to the document """ if dependencies is not None: sectionNode = addContainerNode(xmlDom, parentNode, "depends") runBefore = Config._buildCommaSeparatedString(dependencies.beforeList) runAfter = Config._buildCommaSeparatedString(dependencies.afterList) addStringNode(xmlDom, sectionNode, "run_before", runBefore) addStringNode(xmlDom, sectionNode, "run_after", runAfter) @staticmethod def _buildCommaSeparatedString(valueList): """ Creates a comma-separated string from a list of values. As a special case, if C{valueList} is C{None}, then C{None} will be returned. @param valueList: List of values to be placed into a string @return: Values from valueList as a comma-separated string. """ if valueList is None: return None return ",".join(valueList) @staticmethod def _addBlankBehavior(xmlDom, parentNode, blankBehavior): """ Adds a blanking behavior container as the next child of a parent. We add the following fields to the document:: blankMode blank_behavior/mode blankFactor blank_behavior/factor The node itself is created as the next child of the parent node. If C{blankBehavior} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param blankBehavior: Blanking behavior to be added to the document. 
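The C{_buildCommaSeparatedString} contract described above is small enough to pin down with examples. This standalone copy mirrors its documented behavior (a sketch for illustration, not the real static method):

```python
def build_comma_separated_string(value_list):
    # Mirrors Config._buildCommaSeparatedString: None passes through as
    # None, otherwise values are joined with commas (no surrounding spaces).
    if value_list is None:
        return None
    return ",".join(value_list)

print(build_comma_separated_string(["collect", "stage", "store"]))  # collect,stage,store
print(build_comma_separated_string(None))                           # None
```

Note the empty-list case: `[]` joins to the empty string, not `None`, which matters when the resulting text node is later read back.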
""" if blankBehavior is not None: sectionNode = addContainerNode(xmlDom, parentNode, "blank_behavior") addStringNode(xmlDom, sectionNode, "mode", blankBehavior.blankMode) addStringNode(xmlDom, sectionNode, "factor", blankBehavior.blankFactor) ################################################# # High-level methods used for validating content ################################################# def _validateContents(self): """ Validates configuration contents per rules discussed in module documentation. This is the second pass at validation. It ensures that any filled-in section contains valid data. Any sections which is not set to C{None} is validated per the rules for that section, laid out in the module documentation (above). @raise ValueError: If configuration is invalid. """ self._validateReference() self._validateExtensions() self._validateOptions() self._validatePeers() self._validateCollect() self._validateStage() self._validateStore() self._validatePurge() def _validateReference(self): """ Validates reference configuration. There are currently no reference-related validations. @raise ValueError: If reference configuration is invalid. """ pass def _validateExtensions(self): """ Validates extensions configuration. The list of actions may be either C{None} or an empty list C{[]} if desired. Each extended action must include a name, a module, and a function. Then, if the order mode is None or "index", an index is required; and if the order mode is "dependency", dependency information is required. @raise ValueError: If reference configuration is invalid. 
""" if self.extensions is not None: if self.extensions.actions is not None: names = [] for action in self.extensions.actions: if action.name is None: raise ValueError("Each extended action must set a name.") names.append(action.name) if action.module is None: raise ValueError("Each extended action must set a module.") if action.function is None: raise ValueError("Each extended action must set a function.") if self.extensions.orderMode is None or self.extensions.orderMode == "index": if action.index is None: raise ValueError("Each extended action must set an index, based on order mode.") elif self.extensions.orderMode == "dependency": if action.dependencies is None: raise ValueError("Each extended action must set dependency information, based on order mode.") checkUnique("Duplicate extension names exist:", names) def _validateOptions(self): """ Validates options configuration. All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose. @raise ValueError: If reference configuration is invalid. """ if self.options is not None: if self.options.startingDay is None: raise ValueError("Options section starting day must be filled in.") if self.options.workingDir is None: raise ValueError("Options section working directory must be filled in.") if self.options.backupUser is None: raise ValueError("Options section backup user must be filled in.") if self.options.backupGroup is None: raise ValueError("Options section backup group must be filled in.") if self.options.rcpCommand is None: raise ValueError("Options section remote copy command must be filled in.") def _validatePeers(self): """ Validates peers configuration per rules in L{_validatePeerList}. @raise ValueError: If peers configuration is invalid. 
""" if self.peers is not None: self._validatePeerList(self.peers.localPeers, self.peers.remotePeers) def _validateCollect(self): """ Validates collect configuration. The target directory must be filled in. The collect mode, archive mode, ignore file, and recursion level are all optional. The list of absolute paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent C{CollectConfig} object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the C{CollectConfig} object to make the complete list for a given directory. @raise ValueError: If collect configuration is invalid. 
""" if self.collect is not None: if self.collect.targetDir is None: raise ValueError("Collect section target directory must be filled in.") if self.collect.collectFiles is not None: for collectFile in self.collect.collectFiles: if collectFile.absolutePath is None: raise ValueError("Each collect file must set an absolute path.") if self.collect.collectMode is None and collectFile.collectMode is None: raise ValueError("Collect mode must either be set in parent collect section or individual collect file.") if self.collect.archiveMode is None and collectFile.archiveMode is None: raise ValueError("Archive mode must either be set in parent collect section or individual collect file.") if self.collect.collectDirs is not None: for collectDir in self.collect.collectDirs: if collectDir.absolutePath is None: raise ValueError("Each collect directory must set an absolute path.") if self.collect.collectMode is None and collectDir.collectMode is None: raise ValueError("Collect mode must either be set in parent collect section or individual collect directory.") if self.collect.archiveMode is None and collectDir.archiveMode is None: raise ValueError("Archive mode must either be set in parent collect section or individual collect directory.") if self.collect.ignoreFile is None and collectDir.ignoreFile is None: raise ValueError("Ignore file must either be set in parent collect section or individual collect directory.") if (collectDir.linkDepth is None or collectDir.linkDepth < 1) and collectDir.dereference: raise ValueError("Dereference flag is only valid when a non-zero link depth is in use.") def _validateStage(self): """ Validates stage configuration. The target directory must be filled in, and the peers are also validated. Peers are only required in this section if the peers configuration section is not filled in. However, if any peers are filled in here, they override the peers configuration and must meet the validation criteria in L{_validatePeerList}. 
@raise ValueError: If stage configuration is invalid. """ if self.stage is not None: if self.stage.targetDir is None: raise ValueError("Stage section target directory must be filled in.") if self.peers is None: # In this case, stage configuration is our only configuration and must be valid. self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) else: # In this case, peers configuration is the default and stage configuration overrides. # Validation is only needed if stage configuration is actually filled in. if self.stage.hasPeers(): self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) def _validateStore(self): """ Validates store configuration. The device type, drive speed, and blanking behavior are optional. All other values are required. Missing booleans will be set to defaults. If blanking behavior is provided, then both a blanking mode and a blanking factor are required. The image writer functionality in the C{writer} module is supposed to be able to handle a device speed of C{None}. Any caller which needs a "real" (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. This is also where we make sure that the media type -- which is already a valid type -- matches up properly with the device type. @raise ValueError: If store configuration is invalid.
""" if self.store is not None: if self.store.sourceDir is None: raise ValueError("Store section source directory must be filled in.") if self.store.mediaType is None: raise ValueError("Store section media type must be filled in.") if self.store.devicePath is None: raise ValueError("Store section device path must be filled in.") if self.store.deviceType == None or self.store.deviceType == "cdwriter": if self.store.mediaType not in VALID_CD_MEDIA_TYPES: raise ValueError("Media type must match device type.") elif self.store.deviceType == "dvdwriter": if self.store.mediaType not in VALID_DVD_MEDIA_TYPES: raise ValueError("Media type must match device type.") if self.store.blankBehavior is not None: if self.store.blankBehavior.blankMode is None and self.store.blankBehavior.blankFactor is None: raise ValueError("If blanking behavior is provided, all values must be filled in.") def _validatePurge(self): """ Validates purge configuration. The list of purge directories may be either C{None} or an empty list C{[]} if desired. All purge directories must contain a path and a retain days value. @raise ValueError: If purge configuration is invalid. """ if self.purge is not None: if self.purge.purgeDirs is not None: for purgeDir in self.purge.purgeDirs: if purgeDir.absolutePath is None: raise ValueError("Each purge directory must set an absolute path.") if purgeDir.retainDays is None: raise ValueError("Each purge directory must set a retain days value.") def _validatePeerList(self, localPeers, remotePeers): """ Validates the set of local and remote peers. Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section. 
@param localPeers: List of local peers @param remotePeers: List of remote peers @raise ValueError: If stage configuration is invalid. """ if localPeers is None and remotePeers is None: raise ValueError("Peer list must contain at least one backup peer.") if localPeers is None and remotePeers is not None: if len(remotePeers) < 1: raise ValueError("Peer list must contain at least one backup peer.") elif localPeers is not None and remotePeers is None: if len(localPeers) < 1: raise ValueError("Peer list must contain at least one backup peer.") elif localPeers is not None and remotePeers is not None: if len(localPeers) + len(remotePeers) < 1: raise ValueError("Peer list must contain at least one backup peer.") names = [] if localPeers is not None: for localPeer in localPeers: if localPeer.name is None: raise ValueError("Local peers must set a name.") names.append(localPeer.name) if localPeer.collectDir is None: raise ValueError("Local peers must set a collect directory.") if remotePeers is not None: for remotePeer in remotePeers: if remotePeer.name is None: raise ValueError("Remote peers must set a name.") names.append(remotePeer.name) if remotePeer.collectDir is None: raise ValueError("Remote peers must set a collect directory.") if (self.options is None or self.options.backupUser is None) and remotePeer.remoteUser is None: raise ValueError("Remote user must either be set in options section or individual remote peer.") if (self.options is None or self.options.rcpCommand is None) and remotePeer.rcpCommand is None: raise ValueError("Remote copy command must either be set in options section or individual remote peer.") if remotePeer.managed: if (self.options is None or self.options.rshCommand is None) and remotePeer.rshCommand is None: raise ValueError("Remote shell command must either be set in options section or individual remote peer.") if (self.options is None or self.options.cbackCommand is None) and remotePeer.cbackCommand is None: raise ValueError("Remote cback 
command must either be set in options section or individual remote peer.") if ((self.options is None or self.options.managedActions is None or len(self.options.managedActions) < 1) and (remotePeer.managedActions is None or len(remotePeer.managedActions) < 1)): raise ValueError("Managed actions list must be set in options section or individual remote peer.") checkUnique("Duplicate peer names exist:", names) ######################################################################## # General utility functions ######################################################################## def readByteQuantity(parent, name): """ Read a byte size value from an XML document. A byte size value is an interpreted string value. If the string value ends with "KB", "MB", or "GB", then the string before that is interpreted as kilobytes, megabytes, or gigabytes. Otherwise, it is interpreted as bytes. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: ByteQuantity parsed from XML document """ data = readString(parent, name) if data is None: return None data = data.strip() if data.endswith("KB"): quantity = data[0:data.rfind("KB")].strip() units = UNIT_KBYTES elif data.endswith("MB"): quantity = data[0:data.rfind("MB")].strip() units = UNIT_MBYTES elif data.endswith("GB"): quantity = data[0:data.rfind("GB")].strip() units = UNIT_GBYTES else: quantity = data.strip() units = UNIT_BYTES return ByteQuantity(quantity, units) def addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity): """ Adds a text node as the next child of a parent, to contain a byte size. If the C{byteQuantity} is None, then the node will be created, but will be empty (i.e. will contain no text node child). The size in bytes will be normalized. If it is larger than 1.0 GB, it will be shown in GB ("1.0 GB"). If it is larger than 1.0 MB, it will be shown in MB ("1.0 MB"). Otherwise, it will be shown in bytes ("423413"). @param xmlDom: DOM tree as from C{impl.createDocument()}.
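The suffix handling in C{readByteQuantity} above can be exercised in isolation. This standalone sketch reproduces just the parsing step (the unit constants and the tuple return are simplified stand-ins for the real C{ByteQuantity} type):

```python
# Standalone sketch of readByteQuantity's suffix parsing: a trailing "KB",
# "MB", or "GB" selects the units; anything else is taken as raw bytes.
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES = 0, 1, 2, 3

def parse_byte_quantity(data):
    data = data.strip()
    for suffix, units in (("KB", UNIT_KBYTES),
                          ("MB", UNIT_MBYTES),
                          ("GB", UNIT_GBYTES)):
        if data.endswith(suffix):
            # Strip the suffix and any whitespace before it.
            return (data[:data.rfind(suffix)].strip(), units)
    return (data, UNIT_BYTES)

print(parse_byte_quantity("1.5 GB"))   # ('1.5', 3)
print(parse_byte_quantity("650MB"))    # ('650', 2)
print(parse_byte_quantity("423413"))   # ('423413', 0)
```

Note the quantity is kept as a string, matching the real function, which hands it to C{ByteQuantity} unconverted.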
@param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param byteQuantity: ByteQuantity object to put into the XML document @return: Reference to the newly-created node. """ if byteQuantity is None: byteString = None elif byteQuantity.units == UNIT_KBYTES: byteString = "%s KB" % byteQuantity.quantity elif byteQuantity.units == UNIT_MBYTES: byteString = "%s MB" % byteQuantity.quantity elif byteQuantity.units == UNIT_GBYTES: byteString = "%s GB" % byteQuantity.quantity else: byteString = byteQuantity.quantity return addStringNode(xmlDom, parentNode, nodeName, byteString) CedarBackup2-2.22.0/CREDITS0000664000175000017500000002540112122615533016567 0ustar pronovicpronovic00000000000000# vim: set ft=text80: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Project : Cedar Backup, release 2 # Revision : $Id: CREDITS 1029 2013-03-21 14:38:17Z pronovic $ # Purpose : Credits for package # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ########## # Credits ########## Most of the source code in this project was written by Kenneth J. Pronovici. Some portions have been based on other pieces of open-source software, as indicated in the source code itself. Unless otherwise indicated, all Cedar Backup source code is Copyright (c) 2004-2011,2013 Kenneth J. Pronovici and is released under the GNU General Public License, version 2. The contents of the GNU General Public License can be found in the LICENSE file, or can be downloaded from http://www.gnu.org/. Various patches have been contributed to the Cedar Backup codebase by Dmitry Rutsky. Major contributions include the initial implementation for the optimized media blanking strategy as well as improvements to the DVD writer implementation. 
The PostgreSQL extension was contributed by Antoine Beaupre ("The Anarcat"), based on the existing MySQL extension. Lukasz K. Nowak helped debug the split functionality and also provided patches for parts of the documentation. Zoran Bosnjak contributed changes to collect.py to implement recursive collect behavior based on recursion level. Jan Medlock contributed patches to improve the manpage and to support recent versions of the /usr/bin/split command. Minor code snippets derived from newsgroup and mailing list postings are not generally attributed unless I used someone else's source code verbatim. Source code annotated as "(c) 2001, 2002 Python Software Foundation" was originally taken from or derived from code within the Python 2.3 codebase. This code was released under the Python 2.3 license, which is an MIT-style academic license. Items under this license include the function util.getFunctionReference(). Source code annotated as "(c) 2000-2004 CollabNet" was originally released under the CollabNet License, which is an Apache/BSD-style license. Items under this license include basic markup and stylesheets used in creating the user manual. The dblite.dtd and readme-dblite.html files are also assumed to be under the CollabNet License, since they were found as part of the Subversion source tree and did not specify an explicit copyright notice. Some of the PDF-specific graphics in the user manual (now obsolete and orphaned off in the doc/pdf directory) were either directly taken from or were derived from images distributed in Norman Walsh's Docbook XSL distribution. These graphics are (c) 1999, 2000, 2001 Norman Walsh and were originally released under a BSD-style license as documented below. Source code annotated as "(c) 2000 Fourthought Inc, USA" was taken from or derived from code within the PyXML distribution and was originally part of the 4DOM suite developed by Fourthought, Inc. Fourthought released the code under a BSD-like license. 
Items under this license include the XML pretty-printing functionality implemented in xmlutil.py. #################### # CollabNet License #################### /* ================================================================ * Copyright (c) 2000-2004 CollabNet. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * 3. The end-user documentation included with the redistribution, if * any, must include the following acknowledgment: "This product includes * software developed by CollabNet (http://www.Collab.Net/)." * Alternately, this acknowledgment may appear in the software itself, if * and wherever such third-party acknowledgments normally appear. * * 4. The hosted project names must not be used to endorse or promote * products derived from this software without prior written * permission. For written permission, please contact info@collab.net. * * 5. Products derived from this software may not use the "Tigris" name * nor may "Tigris" appear in their names without prior written * permission of CollabNet. * * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
* IN NO EVENT SHALL COLLABNET OR ITS CONTRIBUTORS BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE * GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * ==================================================================== * * This software consists of voluntary contributions made by many * individuals on behalf of CollabNet. */ ##################### # Python 2.3 License ##################### PSF LICENSE AGREEMENT FOR PYTHON 2.3 ------------------------------------ 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using Python 2.3 software in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 2.3 alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved" are retained in Python 2.3 alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python 2.3 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 2.3. 4. 
PSF is making Python 2.3 available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 2.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 2.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 2.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python 2.3, Licensee agrees to be bound by the terms and conditions of this License Agreement. ################## # Docbook License ################## Copyright --------- Copyright (C) 1999, 2000, 2001 Norman Walsh Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ``Software''), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
Except as contained in this notice, the names of individuals credited with contribution to this software shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization from the individuals in question. Any stylesheet derived from this Software that is publically distributed will be identified with a different name and the version strings in any derived Software will be changed so that no possibility of confusion between the derived package and this Software will exist. Warranty -------- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL NORMAN WALSH OR ANY OTHER CONTRIBUTOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###################### # Fourthought License ###################### Copyright (c) 2000 Fourthought Inc, USA All Rights Reserved Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of FourThought LLC not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. 
FOURTHOUGHT LLC DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL FOURTHOUGHT BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. CedarBackup2-2.22.0/LICENSE0000664000175000017500000004310511163707065016563 0ustar pronovicpronovic00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. 
These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. 
(Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. 
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Also add information on how to contact you by electronic and paper mail. 
If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19yy name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. CedarBackup2-2.22.0/manual/0002775000175000017500000000000012143054372017025 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/manual/Makefile0000664000175000017500000000775011163746605020504 0ustar pronovicpronovic00000000000000# vim: set ft=make: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Make # Project : Cedar Backup, release 2 # Revision : $Id: Makefile 936 2009-03-29 19:35:00Z pronovic $ # Purpose : Makefile used for building the Cedar Backup manual. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######## # Notes ######## # This Makefile was originally taken from the Subversion project's book # (http://svnbook.red-bean.com/) and has been substantially modifed (almost # completely rewritten) for use with Cedar Backup. # # The original Makefile was (c) 2000-2004 CollabNet (see CREDITS). ######################## # Programs and commands ######################## CP = cp INSTALL = install MKDIR = mkdir RM = rm XSLTPROC = xsltproc W3M = w3m ############ # Locations ############ INSTALL_DIR = ../doc/manual XSL_DIR = ../util/docbook STYLES_CSS = $(XSL_DIR)/styles.css XSL_FO = $(XSL_DIR)/fo-stylesheet.xsl XSL_HTML = $(XSL_DIR)/html-stylesheet.xsl XSL_CHUNK = $(XSL_DIR)/chunk-stylesheet.xsl MANUAL_TOP = . MANUAL_DIR = $(MANUAL_TOP)/src MANUAL_CHUNK_DIR = $(MANUAL_DIR)/chunk MANUAL_HTML_TARGET = $(MANUAL_DIR)/manual.html MANUAL_CHUNK_TARGET = $(MANUAL_CHUNK_DIR)/index.html # index.html is created last MANUAL_TEXT_TARGET = $(MANUAL_DIR)/manual.txt MANUAL_XML_SOURCE = $(MANUAL_DIR)/book.xml MANUAL_ALL_SOURCE = $(MANUAL_DIR)/*.xml MANUAL_HTML_IMAGES = $(MANUAL_DIR)/images/html/*.png ############################################# # High-level targets and simple dependencies ############################################# all: manual-html manual-chunk install: install-manual-html install-manual-chunk install-manual-text clean: -@$(RM) -f $(MANUAL_HTML_TARGET) $(MANUAL_FO_TARGET) $(MANUAL_TEXT_TARGET) -@$(RM) -rf $(MANUAL_CHUNK_DIR) $(INSTALL_DIR): $(INSTALL) --mode=775 -d $(INSTALL_DIR) ################### # HTML build rules ################### manual-html: $(MANUAL_HTML_TARGET) $(MANUAL_HTML_TARGET): $(MANUAL_ALL_SOURCE) $(XSLTPROC) --output $(MANUAL_HTML_TARGET) $(XSL_HTML) $(MANUAL_XML_SOURCE) install-manual-html: 
$(MANUAL_HTML_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=775 -d $(INSTALL_DIR)/images $(INSTALL) --mode=664 $(MANUAL_HTML_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=664 $(STYLES_CSS) $(INSTALL_DIR) $(INSTALL) --mode=664 $(MANUAL_HTML_IMAGES) $(INSTALL_DIR)/images ########################### # Chunked HTML build rules ##################*######## manual-chunk: $(MANUAL_CHUNK_TARGET) # The trailing slash in the $(XSLTPROC) command is essential, so that xsltproc will output pages to the dir $(MANUAL_CHUNK_TARGET): $(MANUAL_ALL_SOURCE) $(STYLES_CSS) $(MANUAL_HTML_IMAGES) $(MKDIR) -p $(MANUAL_CHUNK_DIR) $(MKDIR) -p $(MANUAL_CHUNK_DIR)/images $(XSLTPROC) --output $(MANUAL_CHUNK_DIR)/ $(XSL_CHUNK) $(MANUAL_XML_SOURCE) $(CP) $(STYLES_CSS) $(MANUAL_CHUNK_DIR) $(CP) $(MANUAL_HTML_IMAGES) $(MANUAL_CHUNK_DIR)/images install-manual-chunk: $(MANUAL_CHUNK_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=775 -d $(INSTALL_DIR)/images $(INSTALL) --mode=664 $(MANUAL_CHUNK_DIR)/*.html $(INSTALL_DIR) $(INSTALL) --mode=664 $(STYLES_CSS) $(INSTALL_DIR) $(INSTALL) --mode=664 $(MANUAL_HTML_IMAGES) $(INSTALL_DIR)/images ################### # Text build rules ################### manual-text: manual-html $(MANUAL_TEXT_TARGET) $(MANUAL_TEXT_TARGET): $(W3M) -dump -cols 80 $(MANUAL_HTML_TARGET) > $(MANUAL_TEXT_TARGET) install-manual-text: $(MANUAL_TEXT_TARGET) $(INSTALL_DIR) $(INSTALL) --mode=664 $(MANUAL_TEXT_TARGET) $(INSTALL_DIR) CedarBackup2-2.22.0/manual/src/0002775000175000017500000000000012143054372017614 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/manual/src/book.xml0000664000175000017500000000777111415165677021316 0ustar pronovicpronovic00000000000000 ]> Cedar Backup Software Manual First Kenneth J. Pronovici Juliana E. Pronovici 2005-2008 Kenneth J. Pronovici This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. 
For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. &preface; &intro; &basic; &install; &commandline; &config; &extensions; &extenspec; &depends; &recovering; &securingssh; ©right; CedarBackup2-2.22.0/manual/src/config.xml0000664000175000017500000061205712143053423021610 0ustar pronovicpronovic00000000000000 Configuration Overview Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy. First, familiarize yourself with the concepts in . In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in . Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over (in ) to become familiar with the command line interface. Then, look over (below) and create a configuration file for each peer in your backup pool. 
To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location. After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done. Which Platform? Cedar Backup has been designed for use on all UNIX-like systems. However, since it was developed on a Debian GNU/Linux system, and because I am a Debian developer, the packaging is prettier and the setup is somewhat simpler on a Debian system than on a system where you install from source. The configuration instructions below have been generalized so they should work well regardless of what platform you are running (i.e. RedHat, Gentoo, FreeBSD, etc.). If instructions vary for a particular platform, you will find a note related to that platform. I am always open to adding more platform-specific hints and notes, so write me if you find problems with these instructions. Configuration File Format Cedar Backup is configured through an XML See for a basic introduction to XML. configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions. All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. See , in . 
The extensions section is always optional and can be omitted unless extensions are in use. Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset. Sample Configuration File Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes a stripped config file in /etc/cback.conf and a larger sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample. This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections. <?xml version="1.0"?> <cb_config> <reference> <author>Kenneth J. 
Pronovici</author> <revision>1.3</revision> <description>Sample</description> </reference> <options> <starting_day>tuesday</starting_day> <working_dir>/opt/backup/tmp</working_dir> <backup_user>backup</backup_user> <backup_group>group</backup_group> <rcp_command>/usr/bin/scp -B</rcp_command> </options> <peers> <peer> <name>debian</name> <type>local</type> <collect_dir>/opt/backup/collect</collect_dir> </peer> </peers> <collect> <collect_dir>/opt/backup/collect</collect_dir> <collect_mode>daily</collect_mode> <archive_mode>targz</archive_mode> <ignore_file>.cbignore</ignore_file> <dir> <abs_path>/etc</abs_path> <collect_mode>incr</collect_mode> </dir> <file> <abs_path>/home/root/.profile</abs_path> <collect_mode>weekly</collect_mode> </file> </collect> <stage> <staging_dir>/opt/backup/staging</staging_dir> </stage> <store> <source_dir>/opt/backup/staging</source_dir> <media_type>cdrw-74</media_type> <device_type>cdwriter</device_type> <target_device>/dev/cdrw</target_device> <target_scsi_id>0,0,0</target_scsi_id> <drive_speed>4</drive_speed> <check_data>Y</check_data> <check_media>Y</check_media> <warn_midnite>Y</warn_midnite> </store> <purge> <dir> <abs_path>/opt/backup/stage</abs_path> <retain_days>7</retain_days> </dir> <dir> <abs_path>/opt/backup/collect</abs_path> <retain_days>0</retain_days> </dir> </purge> </cb_config> Reference Configuration The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired. This is an example reference configuration section: <reference> <author>Kenneth J. Pronovici</author> <revision>Revision 1.3</revision> <description>Sample</description> <generator>Yet to be Written Config Tool (tm)</generator> </reference> The following elements are part of the reference configuration section: author Author of the configuration file. Restrictions: None revision Revision of the configuration file.
Restrictions: None description Description of the configuration file. Restrictions: None generator Tool that generated the configuration file, if any. Restrictions: None Options Configuration The options configuration section contains configuration options that are not specific to any one action. This is an example options configuration section: <options> <starting_day>tuesday</starting_day> <working_dir>/opt/backup/tmp</working_dir> <backup_user>backup</backup_user> <backup_group>backup</backup_group> <rcp_command>/usr/bin/scp -B</rcp_command> <rsh_command>/usr/bin/ssh</rsh_command> <cback_command>/usr/bin/cback</cback_command> <managed_actions>collect, purge</managed_actions> <override> <command>cdrecord</command> <abs_path>/opt/local/bin/cdrecord</abs_path> </override> <override> <command>mkisofs</command> <abs_path>/opt/local/bin/mkisofs</abs_path> </override> <pre_action_hook> <action>collect</action> <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command> </pre_action_hook> <post_action_hook> <action>collect</action> <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command> </post_action_hook> </options> The following elements are part of the options configuration section: starting_day Day that starts the week. Cedar Backup is built around the idea of weekly backups. The starting day of the week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared. Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive. working_dir Working (temporary) directory to use for backups. This directory is used for writing temporary files, such as tar files or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups. The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).
Restrictions: Must be an absolute path backup_user Effective user that backups should run as. This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced). This value is also used as the default remote backup user for remote peers. Restrictions: Must be non-empty backup_group Effective group that backups should run as. This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced). Restrictions: Must be non-empty rcp_command Default rcp-compatible copy command for staging. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway. Restrictions: Must be non-empty rsh_command Default rsh-compatible command to use for remote shells. The rsh command should be the exact command used for remote shells, including any required options. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Restrictions: Must be non-empty cback_command Default cback-compatible command to use on managed remote clients. The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value is used as the default value for all managed clients.
It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Default set of actions that are managed on remote clients. This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Restrictions: Must be non-empty. override Command to override with a customized path. This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: command Name of the command to be overridden, i.e. cdrecord. Restrictions: Must be a non-empty string. abs_path The absolute path where the overridden command can be found. Restrictions: Must be an absolute path. pre_action_hook Hook configuring a command to be executed before an action. This is a subsection which configures a command to be executed immediately before a named action. 
It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. post_action_hook Hook configuring a command to be executed after an action. This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. 
No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. Peers Configuration The peers configuration section contains a list of the peers managed by a master. This section is only required on a master. This is an example peers configuration section: <peers> <peer> <name>machine1</name> <type>local</type> <collect_dir>/opt/backup/collect</collect_dir> </peer> <peer> <name>machine2</name> <type>remote</type> <backup_user>backup</backup_user> <collect_dir>/opt/backup/collect</collect_dir> <ignore_failures>all</ignore_failures> </peer> <peer> <name>machine3</name> <type>remote</type> <managed>Y</managed> <backup_user>backup</backup_user> <collect_dir>/opt/backup/collect</collect_dir> <rcp_command>/usr/bin/scp</rcp_command> <rsh_command>/usr/bin/ssh</rsh_command> <cback_command>/usr/bin/cback</cback_command> <managed_actions>collect, purge</managed_actions> </peer> </peers> The following elements are part of the peers configuration section: peer (local version) Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer managed by a master. 
This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. The local peer subsection must contain the following fields: name Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local. Restrictions: Must be local. collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp). Restrictions: Must be an absolute path. ignore_failures Ignore failure mode for this peer. The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". peer (remote version) Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.
The remote peer subsection must contain the following fields: name Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote. Restrictions: Must be remote. managed Indicates whether this peer is managed. A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command). Restrictions: Must be an absolute path. ignore_failures Ignore failure mode for this peer. The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". backup_user Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional.
If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty. rcp_command The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty. rsh_command The rsh-compatible command for this peer. The rsh command should be the exact command used for remote shells, including any required options. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section. Restrictions: Must be non-empty cback_command The cback-compatible command for this peer. The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default cback command from the options section. Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Set of actions that are managed for this peer. This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge. This value only applies if the peer is managed. This field is optional.
If it doesn't exist, the backup will use the default list of managed actions from the options section. Restrictions: Must be non-empty. Collect Configuration The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up. Using a Link Farm Sometimes, it's not very convenient to list directories one by one in the Cedar Backup configuration file. For instance, when backing up your home directory, you often exclude as many directories as you include. The ignore file mechanism can be of some help, but it still isn't very convenient if there are a lot of directories to ignore (or if new directories pop up all of the time). In this situation, one option is to use a link farm rather than listing all of the directories in configuration. A link farm is a directory that contains nothing but a set of soft links to other files and directories. Normally, Cedar Backup does not follow soft links, but you can override this behavior for individual directories using the link_depth and dereference options (see below). When using a link farm, you still have to deal with each backed-up directory individually, but you don't have to modify configuration. Some users find that this works better for them. In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.
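The link farm idea can be sketched programmatically. This is a minimal illustration using hypothetical paths (a /tmp/linkfarm directory linking to /etc and /var/log); it is not part of Cedar Backup itself:

```python
import os

# Build a link farm: a directory containing nothing but soft links to the
# real directories we want backed up. Paths here are hypothetical examples.
farm = "/tmp/linkfarm"
os.makedirs(farm, exist_ok=True)
for target in ["/etc", "/var/log"]:
    # Name each link after its target, e.g. /var/log becomes var-log
    link = os.path.join(farm, target.strip("/").replace("/", "-"))
    if not os.path.islink(link):
        os.symlink(target, link)
print(sorted(os.listdir(farm)))
```

You would then configure /tmp/linkfarm as a single collect directory with link_depth set to 1 (and dereference set to Y if you want the link targets themselves backed up), instead of listing /etc and /var/log individually in configuration.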
This is an example collect configuration section: <collect> <collect_dir>/opt/backup/collect</collect_dir> <collect_mode>daily</collect_mode> <archive_mode>targz</archive_mode> <ignore_file>.cbignore</ignore_file> <exclude> <abs_path>/etc</abs_path> <pattern>.*\.conf</pattern> </exclude> <file> <abs_path>/home/root/.profile</abs_path> </file> <dir> <abs_path>/etc</abs_path> </dir> <dir> <abs_path>/var/log</abs_path> <collect_mode>incr</collect_mode> </dir> <dir> <abs_path>/opt</abs_path> <collect_mode>weekly</collect_mode> <exclude> <abs_path>/opt/large</abs_path> <rel_path>backup</rel_path> <pattern>.*tmp</pattern> </exclude> </dir> </collect> The following elements are part of the collect configuration section: collect_dir Directory to collect files into. On a client, this is the directory that tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory. This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form. Restrictions: Must be an absolute path collect_mode Default collect mode. The collect mode describes how frequently a directory is backed up. See (in ) for more information. This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Default archive mode for collect files. The archive mode maps to the way that a backup file is stored.
A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of tar, targz or tarbz2. ignore_file Default ignore file name. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be non-empty recursion_level Recursion level to use when collecting directories. This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory. Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory. The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If, instead, you want one archive file per home directory, you can set a recursion level of 1.
Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc. Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high. This field is optional. If it doesn't exist, the backup will use the default recursion level of zero. Restrictions: Must be an integer. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however. This section is optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: abs_path An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path. pattern A pattern to be recursively excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory.
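The implicit anchoring just described can be illustrated with a short Python sketch (matches_exclusion is a hypothetical helper written for this example, not a Cedar Backup function):

```python
import re

# Hypothetical helper illustrating how exclusion patterns behave: the pattern
# is treated as if bounded by ^ and $, so it must match the whole path,
# not just a substring of it.
def matches_exclusion(pattern, path):
    return re.match("^" + pattern + "$", path) is not None

print(matches_exclusion(r".*apache.*", "/var/log/apache"))  # matches: pattern covers the whole path
print(matches_exclusion(r"apache", "/var/log/apache"))      # no match: anchoring requires a full match
```

This is why a pattern like .*apache.* is needed to exclude /var/log/apache; the bare word apache alone would match nothing.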
This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty file A file to be collected. This is a subsection which contains information about a specific file to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed. The collect file subsection contains the following fields: abs_path Absolute path of the file to collect. Restrictions: Must be an absolute path. collect_mode Collect mode for this file. The collect mode describes how frequently a file is backed up. See (in ) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Archive mode for this file. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2. dir A directory to be collected. This is a subsection which contains information about a specific directory to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed. The collect directory subsection contains the following fields: abs_path Absolute path of the directory to collect. The path may be either a directory, a soft link to a directory, or a hard link to a directory.
All three are treated the same at this level. The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc. Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up. Restrictions: Must be an absolute path. collect_mode Collect mode for this directory. The collect mode describes how frequently a directory is backed up. See (in ) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Archive mode for this directory. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2. ignore_file Ignore file name for this directory. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This field is optional. If it doesn't exist, the backup will use the default ignore file name.
Restrictions: Must be non-empty link_depth Link depth value to use for this directory. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc. This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed. Restrictions: If set, must be an integer ≥ 0. dereference Whether to dereference soft links. If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well. This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory. This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced. Restrictions: Must be a boolean (Y or N). exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: abs_path An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.
This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path.

rel_path A relative path to be recursively excluded from the backup. The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

Stage Configuration

The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged to. This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.
This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

<stage>
   <staging_dir>/opt/backup/stage</staging_dir>
</stage>

This is an example stage configuration section that overrides the default list of peers:

<stage>
   <staging_dir>/opt/backup/stage</staging_dir>
   <peer>
      <name>machine1</name>
      <type>local</type>
      <collect_dir>/opt/backup/collect</collect_dir>
   </peer>
   <peer>
      <name>machine2</name>
      <type>remote</type>
      <backup_user>backup</backup_user>
      <collect_dir>/opt/backup/collect</collect_dir>
   </peer>
</stage>

The following elements are part of the stage configuration section:

staging_dir Directory to stage files into. This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself. This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space. Restrictions: Must be an absolute path.

peer (local version) Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.
Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The local peer subsection must contain the following fields:

name Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers. Restrictions: Must be non-empty, and unique among all peers.

type Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local. Restrictions: Must be local.

collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp). Restrictions: Must be an absolute path.

peer (remote version) Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The remote peer subsection must contain the following fields:

name Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call. Restrictions: Must be non-empty, and unique among all peers.

type Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote. Restrictions: Must be remote.

collect_dir Collect directory to stage from for this peer.
The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command). Restrictions: Must be an absolute path.

backup_user Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional. If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty.

rcp_command The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty.

Store Configuration

The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device. This is an example store configuration section:

<store>
   <source_dir>/opt/backup/stage</source_dir>
   <media_type>cdrw-74</media_type>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrw</target_device>
   <target_scsi_id>0,0,0</target_scsi_id>
   <drive_speed>4</drive_speed>
   <check_data>Y</check_data>
   <check_media>Y</check_media>
   <warn_midnite>Y</warn_midnite>
   <no_eject>N</no_eject>
   <refresh_media_delay>15</refresh_media_delay>
   <eject_delay>2</eject_delay>
   <blank_behavior>
      <mode>weekly</mode>
      <factor>1.3</factor>
   </blank_behavior>
</store>

The following elements are part of the store configuration section:

source_dir Directory whose contents should be written to media.
This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc. Restrictions: Must be an absolute path.

device_type Type of the device used to write the media. This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter). This field is optional. If it doesn't exist, the cdwriter device type is assumed. Restrictions: If set, must be either cdwriter or dvdwriter.

media_type Type of the media in the device. Unless you want to throw away a backup disc every week, you are probably best off using rewritable media. You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see (in ). Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

target_device Filesystem device name for writer device. This value is required for both CD writers and DVD writers. This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw. In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified. Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled. Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink. Restrictions: Must be an absolute path.

target_scsi_id SCSI id for the writer device.
This value is optional for CD writers and is ignored for DVD writers. If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord. Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord. For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun. An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord). See for more information on writer devices and how they are configured. Restrictions: If set, must be a valid SCSI identifier.

drive_speed Speed of the drive, i.e. 2 for a 2x device. This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed. For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media. Restrictions: If set, must be an integer ≥ 1.

check_data Whether the media should be validated. This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.
Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

check_media Whether the media should be checked before writing to it. By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.) If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

warn_midnite Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day. Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

no_eject Indicates that the writer device should not be ejected. Under some circumstances, Cedar Backup ejects (opens and closes) the writer device.
This is done because some writer devices need to re-load the media before noticing a media state change (like a new session). For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will never issue an eject command to your writer. Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

refresh_media_delay Number of seconds to delay after refreshing media. This field is optional. If it doesn't exist, no delay will occur. Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds. Restrictions: If set, must be an integer ≥ 1.

eject_delay Number of seconds to delay after ejecting the tray. This field is optional. If it doesn't exist, no delay will occur. If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly: either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds. Restrictions: If set, must be an integer ≥ 1.

blank_behavior Optimized blanking strategy. For more information about Cedar Backup's optimized blanking strategy, see . This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

blank_mode Blanking mode. Restrictions: Must be one of "daily" or "weekly".
blank_factor Blanking factor. Restrictions: Must be a floating point number ≥ 0.

Purge Configuration

The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged. Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0). If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action. You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups. This is an example purge configuration section:

<purge>
   <dir>
      <abs_path>/opt/backup/stage</abs_path>
      <retain_days>7</retain_days>
   </dir>
   <dir>
      <abs_path>/opt/backup/collect</abs_path>
      <retain_days>0</retain_days>
   </dir>
</purge>

The following elements are part of the purge configuration section:

dir A directory to purge within. This is a subsection which contains information about a specific directory to purge within. This section can be repeated as many times as is necessary. At least one purge directory must be configured. The purge directory subsection contains the following fields:

abs_path Absolute path of the directory to purge within. The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than retain_days days ago. Empty directories will also eventually be removed.
The purge directory itself will never be removed. The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files. Restrictions: Must be an absolute path.

retain_days Number of days to retain old files. Once it has been more than this many days since a file was last modified, it is a candidate for removal. Restrictions: Must be an integer ≥ 0.

Extensions Configuration

The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional. Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.
If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions has completed, and you would get no warning about this in your email!

So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action. To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99. This is how the hypothetical action would be configured:

<extensions>
   <action>
      <name>database</name>
      <module>foo</module>
      <function>bar</function>
      <index>99</index>
   </action>
</extensions>

The following elements are part of the extensions configuration section:

action This is a subsection that contains configuration related to a single extended action. This section can be repeated as many times as is necessary. The action subsection contains the following fields:

name Name of the extended action. Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

module Name of the Python module associated with the extension function. Restrictions: Must be a non-empty string and a valid Python identifier.

function Name of the Python extension function within the module. Restrictions: Must be a non-empty string and a valid Python identifier.

index Index of action, for execution ordering. Restrictions: Must be an integer ≥ 0.
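As a concrete illustration, here is roughly what the hypothetical foo module might look like. The module, function, and logger names are illustrative, and the three-argument signature shown (configPath, options, config) is an assumption; check the extension interface documentation elsewhere in this manual before relying on it.

```python
# Hypothetical extension module "foo" for the example above.  Nothing here
# is part of Cedar Backup itself; the signature is an assumption based on
# the extension interface described elsewhere in this manual.
import logging

logger = logging.getLogger("CedarBackup2.extend.foo")

def bar(configPath, options, config):
    """Extended action mapped to the "database" command-line action.

    Assumed contract: called with the configuration path, the parsed
    command-line options, and the parsed configuration; an exception
    raised here signals failure of the extended action.
    """
    logger.info("Executing the database extended action.")
    # ... dump the database repository into the collect directory here ...
```

Because the action is configured with index 99, it runs immediately before the standard collect action (index 100), so the database dump lands in the collect directory before the collect indicator file is written.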
Setting up a Pool of One

Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one). Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc. Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week.
If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation. Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See for more information on writer devices and how they are configured. There is no need to set up your CD/DVD device if you have decided not to execute the store action.
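In store configuration terms, the two alternatives described above look something like this (device names and SCSI ids here are illustrative; substitute the values for your own hardware):

```xml
<!-- Writer addressed purely through its filesystem device path -->
<store>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrw</target_device>
</store>

<!-- SCSI (or SCSI-emulated) writer: cdrecord is given the SCSI id,
     while filesystem operations still use the device path -->
<store>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrw</target_device>
   <target_scsi_id>ATA:1,0,0</target_scsi_id>
</store>
```

Note that target_device is required in both cases, since it is used for pre-write checks and the consistency check even when a SCSI id is configured.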
Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a ready made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700. You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package.
If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in (above) create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the option). Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

Step 8: Test your backup.

Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read.
If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. See SF Bug Tracking at . To be safe, always enable the consistency check option in the store configuration section.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

30 00 * * * root cback all

Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

#!/bin/sh
cback all

You should consider adding the or switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.

Setting up a Client Peer Node

Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.
Note: all of these configuration steps should be run as the root user, unless otherwise indicated. See for some important notes on how to optionally further secure password-less SSH connections to your clients.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc. Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root).
Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure the master in your backup pool.

You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

user@machine> cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine.
However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.

If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night). You should create a collect directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package.
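The layout and permissions described above can be created with a few commands. This is a sketch only: the BACKUP_ROOT default and the user name backup are assumptions, and the chown is skipped when you are not root or when that user does not exist.

```shell
# Create the client backup tree with restrictive permissions.
BACKUP_ROOT="${BACKUP_ROOT:-/opt/backup}"
mkdir -p "$BACKUP_ROOT/collect" "$BACKUP_ROOT/tmp"
chmod 700 "$BACKUP_ROOT" "$BACKUP_ROOT/collect" "$BACKUP_ROOT/tmp"
# Give the tree to the backup user (account name assumed to be "backup").
if [ "$(id -u)" -eq 0 ] && id backup >/dev/null 2>&1; then
    chown -R backup:backup "$BACKUP_ROOT"
fi
```

Mode 700 keeps collected data (which may include password files) readable only by the backup user.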
If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in (above), create a configuration file for your machine. Since you are working with a client, you must configure all action-specific sections for the collect and purge actions.

The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the option).

Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

Step 8: Test your backup.

Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors.

Step 9: Modify the backup cron jobs.
Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback collect
30 06 * * * root cback purge

You should consider adding the or switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. See in . For general information about using cron, see the manpage for crontab(5).

On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.

Setting up a Master Peer Node

Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client.

Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.
Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.

Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job.
Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user that is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See for more information on writer devices and how they are configured.

There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a ready made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice.
See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space.

You should create a collect directory, a staging directory and a working (temporary) directory.
One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

Note that the master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master.

Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a consolidation point machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the option). Configuration files should always be writable only by root (or by the file owner, if the owner is not root).
If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

Step 8: Test connectivity to client machines.

This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client. Log in as the backup user on the master, and then use the command ssh user@machine, where user is the name of the backup user on the client machine, and machine is the name of the client machine. If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

Step 9: Test your backup.

Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.) When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all.
You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read. You may also want to run cback purge on the master and each client once you have finished validating that everything worked.

If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. To be safe, always enable the consistency check option in the store configuration section.

Step 10: Modify the backup cron jobs.

Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback collect
30 02 * * * root cback stage
30 04 * * * root cback store
30 06 * * * root cback purge

You should consider adding the or switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. For general information about using cron, see the manpage for crontab(5).

On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.

Configuring your Writer Device

Device Types

In order to execute the store action, you need to know how to identify your writer device.
Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

Devices identified by device name

For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.

Devices identified by SCSI id

Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type. In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord, and the device name will be used for other filesystem operations.

A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system.

On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device.
You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1).

Linux Notes

On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later). Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values. However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

Finding your Linux CD Writer

Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

cdrecord -prcap dev=/dev/cdrom

Running this command on my hardware gives output that looks like this (just the top few lines):

Device type    : Removable CD-ROM
Version        : 0
Response Format: 2
Capabilities   :
Vendor_info    : 'LITE-ON '
Identification : 'DVDRW SOHW-1673S'
Revision       : 'JS02'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Drive capabilities, per MMC-3 page 2A:

If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank.
If this doesn't work, you should try to find an ATA or ATAPI device:

cdrecord -scanbus dev=ATA
cdrecord -scanbus dev=ATAPI

On my development system, I get a result that looks something like this for ATA:

scsibus1:
        1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
        1,1,0   101) *
        1,2,0   102) *
        1,3,0   103) *
        1,4,0   104) *
        1,5,0   105) *
        1,6,0   106) *
        1,7,0   107) *

Again, if you get a result that you recognize, you have again probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>.

Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO () or the ATA RAID HOWTO () for more information.

Mac OS X Notes

On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, i.e. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l. (Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information.)

Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware.
The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully. If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution.

Optimized Blanking Strategy

When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period. Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often.

If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked. This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration, otherwise you will risk losing data.
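For the weekly mode, a suitable blanking factor can be estimated from the sizes of previous staging directories: divide the total size of one week's backups by the size of the full backup at the start of that week. A rough sketch with illustrative numbers (sizes in kilobytes, as reported by du -s; substitute your own measurements):

```shell
# Estimate a weekly blanking factor: weekly total divided by the size
# of the full backup taken on the first day of the week.
full=6812                                           # week-start full backup
incrementals=$((3044 + 3152 + 3056 + 3060 + 3056 + 4776))
total=$((full + incrementals))
awk -v t="$total" -v f="$full" 'BEGIN { printf "ratio: %.4f\n", t / f }'
# prints: ratio: 3.9571
```

Rounding the result up (here, to 5.0) gives a margin of safety against mid-week growth.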
If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup. If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation.

Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

bytes available / (1 + bytes required) ≤ blanking factor

Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

Total size of weekly backup / Full backup size at the start of the week

This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

/opt/backup/staging# du -s 2007/03/*
3040    2007/03/01
3044    2007/03/02
6812    2007/03/03
3044    2007/03/04
3152    2007/03/05
3056    2007/03/06
3060    2007/03/07
3056    2007/03/08
4776    2007/03/09
6812    2007/03/10
11824   2007/03/11

In this case, the ratio is approximately 4:

(6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571

To be safe, you might choose to configure a factor of 5.0. Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary. If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.

Installation

Background

There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.
If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

Non-Linux Platforms

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

To run a Cedar Backup client, you really just need a working Python installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided further on in this chapter.

If you would like to use Cedar Backup on a non-Linux system, you should install the Python source distribution along with all of the indicated dependencies. Then, please report back to the Cedar Backup Users mailing list with information about your platform and any problems you encountered. See SF Mailing Lists at .

Installing on a Debian System

The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude. If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian etch release is the first release to contain Cedar Backup.) Otherwise, you need to install from the Cedar Solutions APT data source. To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file. See SF Bug Tracking at .
After you have configured the proper APT data source, install Cedar Backup using this set of commands:

$ apt-get update
$ apt-get install cedar-backup2 cedar-backup2-doc

Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.

If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. See .

In either case, once the package has been installed, you can proceed to configuration as described in .

The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

Installing from Source

On platforms other than Debian, Cedar Backup is installed from a Python source distribution. See . You will have to manage dependencies on your own. Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out . This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

Installing Dependencies

Cedar Backup requires a number of external packages in order to function properly.
Before installing Cedar Backup, you must make sure that these dependencies are met. Cedar Backup is written in Python and requires version 2.5 or greater of the language. Python 2.5 was released on 19 Sep 2006, so by now most current Linux and BSD distributions should include it. You must install Python on every peer node in a pool (master or client). Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines. Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action: mkisofs eject mount umount volname Then, you need this utility if you are writing CD media: cdrecord or this utility if you are writing DVD media: growisofs All of these utilities are common and are easy to find for almost any UNIX-like operating system. Installing the Source Package Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py. Once you have downloaded the source package from the Cedar Solutions website, untar it: $ zcat CedarBackup2-2.0.0.tar.gz | tar xvf - This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory will always match the version number in the filename. If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps: $ cd CedarBackup2-2.0.0 $ python setup.py install Make sure that you are using Python 2.5 or better to execute setup.py. You may also wish to run the unit tests before actually installing anything.
Run them like so: python util/test.py If any unit test reports a failure on your system, please email me (support@cedar-solutions.com) the output from the unit test, so I can fix the problem. This is particularly important for non-Linux platforms where I do not have a test system available to me. Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option: $ python setup.py --help $ python setup.py install --help In any case, once the package has been installed, you can proceed to configuration as described in . Data Recovery Finding your Data The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is, if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.) Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name. This is the root directory of my example disc: root:/mnt/cdrw# ls -l total 4 drwxr-x--- 3 backup backup 4096 Sep 01 06:30 2005/ In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005.
If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006). Within each year directory is one subdirectory for each month represented in the backup. root:/mnt/cdrw/2005# ls -l total 2 dr-xr-xr-x 6 root root 2048 Sep 11 05:30 09/ In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005). Within each month directory is one subdirectory for each day represented in the backup. root:/mnt/cdrw/2005/09# ls -l total 8 dr-xr-xr-x 5 root root 2048 Sep 7 05:30 07/ dr-xr-xr-x 5 root root 2048 Sep 8 05:30 08/ dr-xr-xr-x 5 root root 2048 Sep 9 05:30 09/ dr-xr-xr-x 5 root root 2048 Sep 11 05:30 11/ Depending on how far into the backup week your media is, you might have as few as one daily directory in here, or as many as seven. Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup: root:/mnt/cdrw/2005/09/07# ls -l total 10 dr-xr-xr-x 2 root root 2048 Sep 7 02:31 host1/ -r--r--r-- 1 root root 0 Sep 7 03:27 cback.stage dr-xr-xr-x 2 root root 4096 Sep 7 02:30 host2/ dr-xr-xr-x 2 root root 4096 Sep 7 03:23 host3/ In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27. Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.
root:/mnt/cdrw/2005/09/07/host1# ls -l total 157976 -r--r--r-- 1 root root 11206159 Sep 7 02:30 boot.tar.bz2 -r--r--r-- 1 root root 0 Sep 7 02:30 cback.collect -r--r--r-- 1 root root 3199 Sep 7 02:30 dpkg-selections.txt.bz2 -r--r--r-- 1 root root 908325 Sep 7 02:30 etc.tar.bz2 -r--r--r-- 1 root root 389 Sep 7 02:30 fdisk-l.txt.bz2 -r--r--r-- 1 root root 1003100 Sep 7 02:30 ls-laR.txt.bz2 -r--r--r-- 1 root root 19800 Sep 7 02:30 mysqldump.txt.bz2 -r--r--r-- 1 root root 4133372 Sep 7 02:30 opt-local.tar.bz2 -r--r--r-- 1 root root 44794124 Sep 8 23:34 opt-public.tar.bz2 -r--r--r-- 1 root root 30028057 Sep 7 02:30 root.tar.bz2 -r--r--r-- 1 root root 4747070 Sep 7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2 -r--r--r-- 1 root root 603863 Sep 7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2 -r--r--r-- 1 root root 113484 Sep 7 02:30 var-lib-jspwiki.tar.bz2 -r--r--r-- 1 root root 19556660 Sep 7 02:30 var-log.tar.bz2 -r--r--r-- 1 root root 14753855 Sep 7 02:30 var-mail.tar.bz2 As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent. Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2), represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki. The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension. The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).
Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc. Recovering Filesystem Data Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar), represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar). Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration. If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week. Where to extract your backup If you are restoring a home directory or some other non-system directory as part of a full restore, it is probably fine to extract the backup directly into the filesystem. 
If you are restoring a system directory like /etc as part of a full restore, extracting directly into the filesystem is likely to break things, especially if you re-installed a newer version of your operating system than the one you originally backed up. It's better to extract directories like this to a temporary location and pick out only the files you find you need. When doing a partial restore, I suggest always extracting to a temporary location. Doing it this way gives you more control over what you restore, and helps you avoid compounding your original problem with another one (like overwriting the wrong file, oops). Full Restore To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.) All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location. For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/): root:/# bzcat boot.tar.bz2 | tar xvf - Of course, use zcat or just cat, depending on what kind of compression is in use. If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /. root:/tmp# bzcat boot.tar.bz2 | tar xvf - Again, use zcat or just cat as appropriate. For more information, you might want to check out the manpage or GNU info documentation for the tar command. Partial Restore Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it.
Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things). The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Where with a full restore, you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, would appear in more than one backup. Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place. Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup: root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file Of course, use zcat or just cat, depending on what kind of compression is in use. The t in tvf tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternately, you can omit the path/to/file and search through the output using more or less. If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there. Once you have found your file, extract it using tar in extract (x) mode: root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file Again, use zcat or just cat as appropriate.
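The list-then-extract sequence above can be rehearsed end-to-end without touching real backup media. Everything in this sketch is fabricated for the demonstration (the archive name, paths, and file contents are invented):

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Fabricate a tiny "backup" so the demo is self-contained.
mkdir -p src/etc
echo "backed up content" > src/etc/myfile.conf
tar -cjf full.tar.bz2 -C src etc

# Step 1: list (t) rather than extract, to confirm the file is present.
bzcat full.tar.bz2 | tar tf - etc/myfile.conf   # prints etc/myfile.conf

# Step 2: extract (x) just that one file into the current directory.
bzcat full.tar.bz2 | tar xf - etc/myfile.conf
cat etc/myfile.conf
```

The same two steps (tar tf to confirm, then tar xf to retrieve) apply unchanged to a real staged archive.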
Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file. For more information, you might want to check out the manpage or GNU info documentation for the tar command. Recovering MySQL Data MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup. I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it! MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure. First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute: daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root Of course, use zcat or just cat, depending on what kind of compression is in use. Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.
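Before piping a dump into mysql, it can be reassuring to peek at it and confirm which databases it will create. This sketch fabricates a tiny stand-in dump so it is self-contained; with a real backup you would point bzcat at your actual mysqldump.txt.bz2 instead:

```shell
work=$(mktemp -d)
cd "$work"

# Fabricated stand-in for a real mysqldump output file.
printf 'CREATE DATABASE bugs;\nUSE bugs;\nCREATE DATABASE wiki;\n' > mysqldump.txt
bzip2 mysqldump.txt

# Show which databases the restore would create, without touching mysql.
bzcat mysqldump.txt.bz2 | grep 'CREATE DATABASE'
```

This is purely a read-only inspection step; the restore command shown above is unchanged.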
If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above: daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore: daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database Again, use zcat or just cat as appropriate. For more information on using MySQL, see the documentation on the MySQL web site, , or the manpages for the mysql and mysqldump commands. Recovering Subversion Data Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc. 
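Since the incremental dumps must be loaded in revision order, it can help to sort the dump files numerically by their starting revision rather than trusting directory listing order. This sketch uses fabricated (empty) files named like the examples above; with large revision numbers, numeric sorting avoids the trap where lexical order puts 1000 before 900:

```shell
work=$(mktemp -d)
cd "$work"

# Fabricated dump files matching the naming scheme described above.
touch 'svndump-0:782-opt-svn-repo1.txt.bz2' \
      'svndump-783:785-opt-svn-repo1.txt.bz2' \
      'svndump-786:800-opt-svn-repo1.txt.bz2'

# Split on '-' and sort numerically on the starting revision (field 2).
# Lists the dumps in the order they should be loaded (0:782 first).
ls svndump-*-opt-svn-repo1.txt.bz2 | sort -t- -n -k2
```

Each name in that order would then be fed through bzcat into svnadmin load as shown below.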
Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show. Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic. root:/tmp# svnadmin create --fs-type=fsfs testrepo Next, load the full backup into the repository: root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo Of course, use zcat or just cat, depending on what kind of compression is in use. Follow that with loads for each of the incremental backups: root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo Again, use zcat or just cat as appropriate. When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800). Note: don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both old and new repositories, the results are identical. This means that the repositories do contain the same content. For more information on using Subversion, see the book Version Control with Subversion () or the Subversion FAQ (). Recovering Mailbox Data Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.
Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration. There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date. Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any). Here is an example for a single backed-up file: root:/tmp# rm restore.mbox # make sure it's not left over root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox root:/tmp# grepmail -a -u restore.mbox > nodups.mbox At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist. Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat. 
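That compression handling can be folded directly into the concatenation loop by dispatching on the file extension. This is a sketch with invented filenames and contents standing in for real mbox backup files:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Fabricate three daily backups: plain, gzip-compressed, bzip2-compressed.
echo "From alice message one"   > mbox-20060624-home-user-mail-greylist
echo "From bob message two"     > day2.txt
gzip -c day2.txt  > mbox-20060625-home-user-mail-greylist.gz
echo "From carol message three" > day3.txt
bzip2 -c day3.txt > mbox-20060626-home-user-mail-greylist.bz2

rm -f restore.mbox   # make sure it's not left over
for f in mbox-2006*; do
    case "$f" in
        *.bz2) bzip2 -dc "$f" ;;   # bzip2-compressed backup
        *.gz)  gzip -dc "$f" ;;    # gzip-compressed backup
        *)     cat "$f" ;;         # uncompressed backup
    esac >> restore.mbox
done
```

The resulting restore.mbox can then be run through grepmail -a -u exactly as shown above to eliminate any duplicate messages.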
If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just like shown above for the individual case. Recovering Data split by the Split Extension The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command. The split-up files are not difficult to work with. Simply find all of the files — which could be split between multiple discs — and concatenate them together. root:/tmp# rm usr-src-software.tar.gz # make sure it's not there root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz Then, use the resulting file like usual. Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include). Extension Architecture Interface The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension. You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.
There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this: <extensions> <action> <name>database</name> <module>foo</module> <function>bar</function> <index>101</index> </action> </extensions> In this case, the action database has been mapped to the extension function foo.bar(). Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules: Extensions may not write to stdout or stderr using functions such as print or sys.write. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled. Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output. Extensions may not return any value. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance.
However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration. Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration. def function(configPath, options, config): """Sample extension function.""" pass This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed. The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3). If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions. For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. 
This information might go into a section something like this: <database> <repository>/path/to/repo1</repository> <repository>/path/to/repo2</repository> </database> In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality. Basic Concepts General Architecture Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality. The cback script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback runs setuid or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user. The cback script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback.conf, but this path can be overridden at runtime. See for more information on how Cedar Backup is configured. You should be aware that backups to CD/DVD media can probably be read by any user which has permissions to mount the CD/DVD writer.
If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also . Data Recovery Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in ) can handle the task of restoring their own system, using the standard system tools at hand. If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category. My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need. Cedar Backup Pools There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines. Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way. 
The Backup Process The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control. This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See (later in this chapter) for more information on this subject. A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge. In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order. The cback command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below. See for more information on how a backup run is configured. Flexibility Cedar Backup was designed to be flexible. It allows you to decide for yourself which backup steps you care about executing (and when you execute them), based on your own situation and your own priorities. As an example, I always back up every machine I own. I typically keep 7-10 days of staging directories around, but switch CD/DVD media mostly every week. That way, I can periodically take a disc off-site in case the machine gets stolen or damaged. If you're not worried about these risks, then there's no need to write to disc. 
In fact, some users prefer to use their master machine as a simple consolidation point. They don't back up any data on the master, and don't write to disc at all. They just use Cedar Backup to handle the mechanics of moving backed-up data to a central location. This isn't quite what Cedar Backup was written to do, but it is flexible enough to meet their needs. The Collect Action The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2). There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up. Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file (analogous to .cvsignore in CVS) or specify absolute paths or filename patterns (in terms of Python regular expressions) to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration. This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there.
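The filtering rules above amount to pruning a filesystem walk. Here is a minimal sketch of the ignore-indicator behavior, using a hypothetical indicator name of .cbignore (in Cedar Backup, the actual indicator filename is whatever you configure):

```python
import os

def gather_files(root, ignore_file=".cbignore"):
    """Walk a collect directory, skipping any subtree that contains the
    configured ignore indicator file.  The ".cbignore" name here is a
    placeholder; Cedar Backup takes the name from configuration."""
    collected = []
    for dirpath, dirnames, filenames in os.walk(root):
        if ignore_file in filenames:
            dirnames[:] = []  # prune: do not descend into this subtree
            continue          # and do not collect its files either
        collected.extend(os.path.join(dirpath, f) for f in filenames)
    return collected
```

The real collect action layers absolute-path and regular-expression exclusions on top of this, but the pruning idea is the same.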
If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action). The Stage Action The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name. For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer. Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh. If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running. Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc. Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged. The Store Action The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. 
After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful. If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs. This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine. The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully. Current Staging Directory The store action tries to be smart about finding the current staging directory. It first checks the current day's staging directory. If that directory exists, and it has not yet been written to disc (i.e. there is no store indicator), then it will be used. Otherwise, the store action will look for an unused staging directory for either the previous day or the next day, in that order. A warning will be written to the log under these circumstances (controlled by the <warn_midnite> configuration value). This behavior varies slightly when the option is in effect. Under these circumstances, any existing store indicator will be ignored. Also, the store action will always attempt to use the current day's staging directory, ignoring any staging directories for the previous day or the next day. 
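The search order just described, including the behavior when a full backup is requested, can be sketched as follows. The YYYY/MM/DD directory layout and the cback.store indicator filename are assumptions for illustration:

```python
import os
from datetime import date, timedelta

def find_staging_dir(staging_root, today=None, full=False):
    """Sketch of the store action's staging-directory search.  In a
    normal run, try today, then the previous day, then the next day,
    skipping any directory already marked with a store indicator.
    With a full backup, always use today's directory and ignore any
    existing indicator.  Layout and indicator name are assumptions."""
    today = today or date.today()
    def path_for(d):
        return os.path.join(staging_root, d.strftime("%Y/%m/%d"))
    if full:
        path = path_for(today)
        return path if os.path.isdir(path) else None
    for candidate in (today, today - timedelta(days=1), today + timedelta(days=1)):
        path = path_for(candidate)
        if os.path.isdir(path) and not os.path.exists(os.path.join(path, "cback.store")):
            return path
    return None
```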
This way, running a full store action more than once concurrently will always produce the same results. (You might imagine a use case where a person wants to make several copies of the same full backup.) The Purge Action The purge action is the fourth and final action in a standard backup run. It executes both on the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged. Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration. The All Action The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line. Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works. The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions. The Validate Action The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line. 
The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.). The Initialize Action The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device. However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized. Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP). Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label). The Rebuild Action The rebuild action is an exception-handling action that is executed independently of a standard backup run. It cannot be combined with any other actions on the command line. The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason. To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session.
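The date arithmetic behind the rebuild action's search is straightforward. A sketch, assuming the week starts on Monday (in practice, the starting day of the week follows configuration):

```python
from datetime import date, timedelta

def week_start(today, first_weekday=0):
    """First day of the week containing 'today'.  Monday (0) as the
    start of the week is an assumption for this sketch."""
    return today - timedelta(days=(today.weekday() - first_weekday) % 7)

def rebuild_dates(today):
    """All dates from the start of the week through today, inclusive:
    the range the rebuild action scans for unpurged staging directories."""
    start = week_start(today)
    return [start + timedelta(days=n) for n in range((today - start).days + 1)]
```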
The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action. Coordination between Master and Clients Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just take care of it for me. Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged. Managed Backups Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available (for instance, SourceForge shell accounts). When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell. To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients. Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.
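Putting the coordination rules and the managed feature together, a typical schedule might look like the system crontab (/etc/crontab) fragment below. All times are illustrative, and the managed variant assumes that collect and purge are configured as managed actions for each client:

```shell
# On each client: collect early, purge only after the master has staged.
30 0 * * *  root  cback collect
30 4 * * *  root  cback purge

# On the master: stage after all clients have collected, then store/purge.
30 2 * * *  root  cback stage
30 3 * * *  root  cback store
30 4 * * *  root  cback purge

# Alternatively, with managed backups the clients need no cron entries at
# all; the master drives the client actions over the remote shell.
30 0 * * *  root  cback --managed collect
30 2 * * *  root  cback stage
30 3 * * *  root  cback store
30 4 * * *  root  cback --managed purge
```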
However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature. Media and Device Types Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVD±RW drive. When using a new enough backup device, a new multisession ISO image (an ISO image is the standard way of creating a filesystem to be copied to a CD or DVD; it is essentially a filesystem-within-a-file, and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs; see Wikipedia for more information) is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data).
Cedar Backup currently supports four different kinds of CD media: cdr-74 74-minute non-rewritable CD media cdrw-74 74-minute rewritable CD media cdr-80 80-minute non-rewritable CD media cdrw-80 80-minute rewritable CD media I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable. Cedar Backup also supports two kinds of DVD media: dvd+r Single-layer non-rewritable DVD+R media dvd+rw Single-layer rewritable DVD+RW media The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type. Incremental Backups Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis. In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value (the checksum is actually an SHA cryptographic hash; see Wikipedia for more information) for each backed-up file.
The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged. Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week. Extensions Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step. Prior to Cedar Backup 2.0, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration. Starting with version 2.0, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process, (i.e. not collect, stage, store or purge) but can be executed by Cedar Backup when properly configured. Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action. 
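A skeleton of an extension module might look like the following. The three-argument signature reflects my understanding of the extension interface; treat the details as an assumption and consult the extension specification before writing a real extension:

```python
# Hypothetical extension module, e.g. saved as mytape.py on the Python path
# and wired up in the <extensions> section of Cedar Backup configuration.

def executeAction(configPath, options, config):
    """An extension's action function.  The arguments (path to the
    configuration file, the parsed command-line options object, and the
    parsed configuration object) are assumptions based on the extension
    interface as I understand it.  Raise an exception to signal failure;
    return normally on success."""
    # ... do the specialized backup work here, for instance dumping a
    # database into one of the configured collect directories ...
    return None
```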
Hopefully, as the Cedar Backup 2.0 user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase. Users should see for more information on how extensions are configured, and for details on all of the officially-supported extensions. Developers may be interested in . Command Line Tools Overview Cedar Backup comes with two command-line programs, the cback and cback-span commands. The cback command is the primary command line interface and the only Cedar Backup program that most users will ever need. Users that have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback-span tool to split their data between multiple discs. The <command>cback</command> command Introduction Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process. Syntax The cback command has the following syntax: Usage: cback [switches] action(s) The following switches are accepted: -h, --help Display this usage/help listing -V, --version Display version information -b, --verbose Print verbose output as well as logging to disk -q, --quiet Run quietly (display no output to the screen) -c, --config Path to config file (default: /etc/cback.conf) -f, --full Perform a full backup, regardless of configuration -M, --managed Include managed clients when executing actions -N, --managed-only Include ONLY managed clients when executing actions -l, --logfile Path to logfile (default: /var/log/cback.log) -o, --owner Logfile ownership, user:group (default: root:adm) -m, --mode Octal logfile permissions mode (default: 640) -O, --output Record some sub-command (i.e.
cdrecord) output to the log -d, --debug Write debugging information to the log (implies --output) -s, --stack Dump a Python stack trace instead of swallowing exceptions -D, --diagnostics Print runtime diagnostics to the screen and exit The following actions may be specified: all Take all normal actions (collect, stage, store, purge) collect Take the collect action stage Take the stage action store Take the store action purge Take the purge action rebuild Rebuild "this week's" disc if possible validate Validate configuration only initialize Initialize media for use with Cedar Backup You may also specify extended actions that have been defined in configuration. You must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions and/or extended actions may be specified in any arbitrary order; they will be executed in a sensible order. The "all", "rebuild", "validate", and "initialize" actions may not be combined with other actions. Note that the all action only executes the standard four actions. It never executes any of the configured extensions. Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing. Switches , Display usage/help listing. , Display version information. , Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. , Run quietly (display no output to the screen). , Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf. , Perform a full backup, regardless of configuration.
For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started. , Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally. , Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally. , Specify the path to an alternate logfile. The default logfile is /var/log/cback.log. , Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. , Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. , Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media. , Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the option, as well. , Dump a Python stack trace instead of swallowing exceptions.
This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. , Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. Actions You can find more information about the various actions in (in ). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions). If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however. The <command>cback-span</command> command Introduction Cedar Backup was designed — and is still primarily focused — around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data. However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs. cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run.
All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs. cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension). In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be arbitrarily split up so that space is utilized most efficiently. Syntax The cback-span command has the following syntax: Usage: cback-span [switches] Cedar Backup 'span' tool. This Cedar Backup utility spans staged data between multiple discs. It is a utility, not an extension, and requires user interaction. The following switches are accepted, mostly to set up underlying Cedar Backup functionality: -h, --help Display this usage/help listing -V, --version Display version information -b, --verbose Print verbose output as well as logging to disk -c, --config Path to config file (default: /etc/cback.conf) -l, --logfile Path to logfile (default: /var/log/cback.log) -o, --owner Logfile ownership, user:group (default: root:adm) -m, --mode Octal logfile permissions mode (default: 640) -O, --output Record some sub-command (i.e. cdrecord) output to the log -d, --debug Write debugging information to the log (implies --output) -s, --stack Dump a Python stack trace instead of swallowing exceptions Switches , Display usage/help listing. , Display version information. , Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.
, Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf. , Specify the path to an alternate logfile. The default logfile is /var/log/cback.log. , Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. , Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. , Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media. , Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the option, as well. , Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. Using <command>cback-span</command> As discussed above, cback-span is an interactive command. It cannot be run from cron. You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.
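Of the two, the cushion percentage involves only simple arithmetic: it reduces the usable media capacity by the given percentage. A sketch of the idea (the tool's real figure can differ slightly, since filesystem overhead is only estimated):

```python
def usable_capacity(capacity_mb, cushion_percent):
    """Capacity left after setting aside the cushion.  This is a
    back-of-the-envelope version of the calculation; the actual tool
    may account for filesystem overhead differently."""
    return capacity_mb * (1.0 - cushion_percent / 100.0)
```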
The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data; it's usually more like 627 MB. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly. The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm. The four available fit algorithms are: worst The worst-fit algorithm. The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing. best The best-fit algorithm. The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms. first The first-fit algorithm.
The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting. alternate A hybrid algorithm that I call alternate-fit. This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items. Sample run Below is a log showing a sample cback-span run. ================================================ Cedar Backup 'span' tool ================================================ This the Cedar Backup span tool. It is used to split up staging data when that staging data does not fit onto a single disc. This utility operates using Cedar Backup configuration. Configuration specifies which staging directory to look at and which writer device and media type to use. Continue? [Y/n]: === Cedar Backup store configuration looks like this: Source Directory...: /tmp/staging Media Type.........: cdrw-74 Device Type........: cdwriter Device Path........: /dev/cdrom Device SCSI ID.....: None Drive Speed........: None Check Data Flag....: True No Eject Flag......: False Is this OK? 
[Y/n]:
===

Please wait, indexing the source directory (this may take a while)...
===

The following daily staging directories have not yet been written to disc:

   /tmp/staging/2007/02/07
   /tmp/staging/2007/02/08
   /tmp/staging/2007/02/09
   /tmp/staging/2007/02/10
   /tmp/staging/2007/02/11
   /tmp/staging/2007/02/12
   /tmp/staging/2007/02/13
   /tmp/staging/2007/02/14

The total size of the data in these directories is 1.00 GB.

Continue? [Y/n]:
===

Based on configuration, the capacity of your media is 650.00 MB.

Since estimates are not perfect and there is some uncertainty in media
capacity calculations, it is good to have a "cushion", a percentage of
capacity to set aside. The cushion reduces the capacity of your media,
so a 1.5% cushion leaves 98.5% remaining.

What cushion percentage? [4.00]:
===

The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
It will take at least 2 disc(s) to store your 1.00 GB of data.

Continue? [Y/n]:
===

Which algorithm do you want to use to span your data across multiple discs?

The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a different one later.

Which algorithm? [worst]:
===

Please wait, generating file lists (this may take a while)...
===

Using the "worst-fit" algorithm, Cedar Backup can split your data into 2 discs.

Disc 1: 246 files, 615.97 MB, 98.20% utilization
Disc 2: 8 files, 412.96 MB, 65.84% utilization

Accept this solution? [Y/n]: n
===

Which algorithm do you want to use to span your data across multiple discs?

The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a different one later.

Which algorithm?
[worst]: alternate
===

Please wait, generating file lists (this may take a while)...
===

Using the "alternate-fit" algorithm, Cedar Backup can split your data into 2 discs.

Disc 1: 73 files, 627.25 MB, 100.00% utilization
Disc 2: 181 files, 401.68 MB, 64.04% utilization

Accept this solution? [Y/n]: y
===

Please place the first disc in your backup device. Press return when ready.
===

Initializing image...
Writing image to disc...

CedarBackup2-2.22.0/manual/src/securingssh.xml

Securing Password-less SSH Connections

Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients. Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers. Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections. With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers.
We still concede a local attack vector, but at least that vector is restricted to an unprivileged user. Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups. So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd: command="command" Specifies that the command is executed whenever this key is used for authentication. The command supplied by the user (if any) is ignored. The command is run on a pty if the client requests a pty; otherwise it is run without a tty. If an 8-bit clean channel is required, one must not request a pty or should specify no-pty. A quote may be included in the command by quoting it with a backslash. This option might be useful to restrict certain public keys to perform just a specific operation. An example might be a key that permits remote backups but nothing else. Note that the client may specify TCP and/or X11 forwarding unless they are explicitly prohibited. Note that this option applies to shell, command or subsystem execution. Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer. So, let's imagine that we have two hosts: master mickey, and peer minnie. 
Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
1-2341=-a0sd=-sa0=1z= backup@mickey

This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie. To put the filter in place, we add a command option to the key, like this:

command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to. A very basic validate-backup script might look something like this:

#!/bin/bash
if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
    ${SSH_ORIGINAL_COMMAND}
else
    echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
    exit 1
fi

This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed. For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).
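If you want something slightly more capable than the one-command bash example above, the same idea extends naturally to an allowlist. The sketch below is illustrative only and is not part of Cedar Backup; the is_allowed() helper is hypothetical, and /path/to/collect/ is the same placeholder used elsewhere in this appendix, to be replaced with your real collect directory:

```python
import os
import re
import subprocess
import sys

# Hypothetical allowlist: the scp commands a master issues against a
# non-managed peer.  /path/to/collect is a placeholder, just as it is
# elsewhere in this appendix -- adapt it to your own configuration.
ALLOWED_PATTERNS = [
    r"^scp -f /path/to/collect/cback\.collect$",
    r"^scp -f /path/to/collect/\*$",
    r"^scp -t /path/to/collect/cback\.stage$",
]

def is_allowed(command):
    """Return True if the command matches one of the allowed patterns."""
    return any(re.match(pattern, command) for pattern in ALLOWED_PATTERNS)

def main():
    # sshd places the caller's original command line in this variable.
    command = os.environ.get("SSH_ORIGINAL_COMMAND", "")
    if not is_allowed(command):
        print("Security policy does not allow command [%s]." % command)
        return 1
    return subprocess.call(command, shell=True)

# Only dispatch when actually invoked by sshd via the command= option.
if __name__ == "__main__" and "SSH_ORIGINAL_COMMAND" in os.environ:
    sys.exit(main())
```

As with the bash version, any command that is not explicitly allowed produces a readable error message and a non-zero exit status, while allowed commands are executed exactly as the caller sent them.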
If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
debug1: Reading configuration data /home/backup/.ssh/config
debug1: Applying options for daystrom
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0

Omit the -v and you have your command: scp -f .profile. For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

scp -f /path/to/collect/cback.collect
scp -f /path/to/collect/*
scp -t /path/to/collect/cback.stage

If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

/usr/bin/cback --full collect
/usr/bin/cback collect

Of course, you would have to list the actual path to the cback executable — exactly the one listed in the <cback_command> configuration option for your managed peer. I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

CedarBackup2-2.22.0/manual/src/depends.xml

Dependencies

Python 2.5

Version 2.5 of the Python interpreter was released on 19 Sep 2006, so most current Linux and BSD distributions should include it.
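If you are not sure whether your interpreter is new enough, you can check from within Python itself. This is only an illustrative sketch; the interpreter_is_supported() helper is hypothetical, not something the cback script actually provides, though cback performs a check of this general kind at startup:

```python
import sys

# Minimum interpreter version required by this series of Cedar Backup.
MINIMUM_VERSION = (2, 5)

def interpreter_is_supported(version_info=None):
    """Return True if the interpreter meets the minimum version."""
    if version_info is None:
        version_info = sys.version_info
    # version_info compares like a tuple; (major, minor) is enough here.
    return tuple(version_info[:2]) >= MINIMUM_VERSION

if not interpreter_is_supported():
    sys.stderr.write("Python %d.%d or later is required.\n" % MINIMUM_VERSION)
```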
Source URL upstream Debian Gentoo RPM Mac OS X (fink) If you can't find a package for your system, install from the package source, using the upstream link. RSH Server and Client Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client. The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server. Source URL upstream Debian Gentoo RPM Mac OS X built-in If you can't find SSH client or server packages for your system, install from the package source, using the upstream link. mkisofs The mkisofs command is used to create ISO filesystem images that can later be written to backup media. Source URL upstream Debian Gentoo unknown RPM Mac OS X (fink) If you can't find a package for your system, install from the package source, using the upstream link. I have classified Gentoo as unknown because I can't find a specific package for that platform. I think that maybe mkisofs is part of the cdrtools package (see below), but I'm not sure. Any Gentoo users want to enlighten me? cdrecord The cdrecord command is used to write ISO images to CD media in a backup device. Source URL upstream Debian Gentoo RPM Mac OS X (fink) If you can't find a package for your system, install from the package source, using the upstream link. dvd+rw-tools The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device. Source URL upstream Debian Gentoo RPM Mac OS X (fink) If you can't find a package for your system, install from the package source, using the upstream link. eject and volname The eject command is used to open and close the tray on a backup device (if the backup device has a tray).
Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc. The volname command is used to determine the volume name of media in a backup device. Source URL upstream Debian Gentoo RPM Mac OS X (fink) If you can't find a package for your system, install from the package source, using the upstream link. mount and umount The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check. Source URL upstream Debian Gentoo unknown RPM Mac OS X built-in If you can't find a package for your system, install from the package source, using the upstream link. I have classified Gentoo as unknown because I can't find a specific package for that platform. It may just be that these two utilities are considered standard, and don't have an independent package of their own. Any Gentoo users want to enlighten me? I have classified Mac OS X as built-in because that operating system does contain a mount command. However, it isn't really compatible with Cedar Backup's idea of mount, and in fact what Cedar Backup needs is closer to the hdiutil command. There are also other issues related to that command, which is why the store action is not really supported on Mac OS X. grepmail The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders. Source URL upstream Debian Gentoo RPM Mac OS X If you can't find a package for your system, install from the package source, using the upstream link. gpg The gpg command is used by the encrypt extension to encrypt files. Source URL upstream Debian Gentoo RPM Mac OS X If you can't find a package for your system, install from the package source, using the upstream link. split The split command is used by the split extension to split up large files. This command is typically part of the core operating system install and is not distributed in a separate package.
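To give a sense of what the split extension accomplishes, here is a minimal Python sketch of splitting a file into fixed-size chunks. The split_file() helper and its .00000-style suffixes are purely illustrative; the real extension drives the system split command and has its own naming and configuration:

```python
def split_file(path, chunk_size):
    """Split path into numbered chunks of at most chunk_size bytes.

    Writes path.00000, path.00001, ... next to the original file and
    returns the list of chunk file names, in order.
    """
    chunks = []
    with open(path, "rb") as source:
        index = 0
        while True:
            data = source.read(chunk_size)
            if not data:
                break  # end of file reached
            chunk_path = "%s.%05d" % (path, index)
            with open(chunk_path, "wb") as chunk:
                chunk.write(data)
            chunks.append(chunk_path)
            index += 1
    return chunks
```

A 25-byte file split with a chunk size of 10 yields three chunks of 10, 10, and 5 bytes; concatenating the chunks in order reproduces the original file, which is exactly the property a restore depends on.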
CedarBackup2-2.22.0/manual/src/intro.xml

Introduction
Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.— Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.
What is Cedar Backup? Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language. There are many different backup software implementations out there in the free software and open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data to CD or DVD on a regular basis. Cedar Backup isn't for you if you want to back up your MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, a CVS or Subversion repository, or a small MySQL database, then Cedar Backup is probably worth your time. Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.
To run a Cedar Backup client, you really just need a working Python installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided in . How to Get Support Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see. If you experience a problem, your best bet is to write the Cedar Backup Users mailing list. See SF Mailing Lists at . This is a public list for all Cedar Backup users. If you write to this list, you might get help from me, or from some other user who has experienced the same thing you have. If you know that the problem you have found constitutes a bug, or if you would like to make an enhancement request, then feel free to file a bug report in the Cedar Solutions Bug Tracking System. See SF Bug Tracking at . If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write support@cedar-solutions.com. That mail will go directly to me or to someone else who can help you. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency. Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. See Simon Tatham's excellent bug reporting tutorial: . In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. 
Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them. Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well. History Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain. In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead. Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. See .
At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that primarily, Python code often ends up being much more readable than Perl code). Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato; Debian's stable releases are named after characters in the Toy Story movie) and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release. Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code. Since then, we have continued to use Cedar Backup for those sites, and Cedar Backup has picked up a handful of other users who have occasionally reported bugs or requested minor enhancements. In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc (a Python code documentation tool; see .) and updated the code to use the newly-released Python logging package (see .) after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code. So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004.
With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result is the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. Tests are implemented using Python's unit test framework. See .
CedarBackup2-2.22.0/manual/src/preface.xml

Preface Purpose This software manual has been written to document the 2.0 series of Cedar Backup, originally released in early 2005. Audience This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces. Conventions Used in This Book This section covers the various conventions used in this manual. Typographic Conventions Term Used for first use of important terms. Command Used for commands, command output, and switches Replaceable Used for replaceable items in code and text Filenames Used for file and directory names Icons This icon designates a note relating to the surrounding text. This icon designates a helpful tip relating to the surrounding text. This icon designates a warning relating to the surrounding text. Organization of This Manual Provides some background about how Cedar Backup came to be, its history, some general information about what needs it is intended to meet, etc. Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual. Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package. Discusses the various Cedar Backup command-line tools, including the primary cback command. Provides detailed information about how to configure Cedar Backup. Describes each of the officially-supported Cedar Backup extensions. Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.
Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems. Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from. Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised. Acknowledgments The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Many thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license. There are not very many Cedar Backup users today, but almost all of them have contributed in some way to the documentation in this manual, either by asking questions, making suggestions or finding bugs. I'm glad to have them as users, and I hope that this new release meets their needs even better than the previous release. My wife Julie puts up with a lot. It's sometimes not easy to live with someone who hacks on open source code in his free time — even when you're a pretty good engineer yourself, like she is. First, she managed to live with a dual-boot Debian and Windoze machine; then she managed to get used to IceWM rather than a prettier desktop; and eventually she even managed to cope with vim when she needed to. Now, even after all that, she has graciously volunteered to edit this manual. I much appreciate her skill with a red pen.

CedarBackup2-2.22.0/manual/src/copyright.xml

Copyright Copyright (c) 2005-2010 Kenneth J.
Pronovici This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. ==================================================================== GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. 
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. 
GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. 
You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. 
However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. 
If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS
Official Extensions

System Information Extension

The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action. This extension saves off the following information to the configured Cedar Backup collect directory. Saved-off data is always compressed using bzip2.

- Currently-installed Debian packages, via dpkg --get-selections
- Disk partition information, via fdisk -l
- System-wide mounted filesystem contents, via ls -laR

The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>sysinfo</name>
      <module>CedarBackup2.extend.sysinfo</module>
      <function>executeAction</function>
      <index>99</index>
   </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

Subversion Extension

The Subversion Extension is a Cedar Backup extension used to back up Subversion version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.
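Several of the extensions described in this chapter follow the same basic pattern that the System Information Extension uses: run an external command and store its bzip2-compressed output in a collect directory. A minimal sketch of that pattern (the function name and file paths are illustrative, not part of Cedar Backup's actual API):

```python
import bz2
import subprocess

def capture_command(command, output_path):
    """Run a command and write its bzip2-compressed stdout to output_path.

    This mirrors the run-and-compress approach the sysinfo extension
    takes for things like "fdisk -l" output; the helper itself is a
    hypothetical sketch, not Cedar Backup code.
    """
    result = subprocess.run(command, capture_output=True, check=True)
    with bz2.open(output_path, "wb") as handle:
        handle.write(result.stdout)
    return output_path
```

For instance, capture_command(["fdisk", "-l"], "/var/backup/collect/fdisk-l.txt.bz2") would save a compressed partition listing, assuming the backup user may run fdisk.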
There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup (in particular, its discussion of repository backups).

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>subversion</name>
      <module>CedarBackup2.extend.subversion</module>
      <function>executeAction</function>
      <index>99</index>
   </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

<subversion>
   <collect_mode>incr</collect_mode>
   <compress_mode>bzip2</compress_mode>
   <repository>
      <abs_path>/opt/public/svn/docs</abs_path>
   </repository>
   <repository>
      <abs_path>/opt/public/svn/web</abs_path>
      <compress_mode>gzip</compress_mode>
   </repository>
   <repository_dir>
      <abs_path>/opt/private/svn</abs_path>
      <collect_mode>daily</collect_mode>
   </repository_dir>
</subversion>

The following elements are part of the Subversion configuration section:

collect_mode
Default collect mode. The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action. This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value.
If all individual repositories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

Restrictions: Must be one of daily, weekly or incr.

compress_mode
Default compress mode. Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

Restrictions: Must be one of none, gzip or bzip2.

repository
A Subversion repository to be collected. This is a subsection which contains information about a specific Subversion repository to be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

The repository subsection contains the following fields:

collect_mode
Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode.

Restrictions: Must be one of daily, weekly or incr.

compress_mode
Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode.

Restrictions: Must be one of none, gzip or bzip2.

abs_path
Absolute path of the Subversion repository to back up.

Restrictions: Must be an absolute path.

repository_dir
A Subversion parent repository directory to be collected. This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up.
This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

The repository_dir subsection contains the following fields:

collect_mode
Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode.

Restrictions: Must be one of daily, weekly or incr.

compress_mode
Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode.

Restrictions: Must be one of none, gzip or bzip2.

abs_path
Absolute path of the Subversion parent repository directory to back up.

Restrictions: Must be an absolute path.

exclude
List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this Subversion parent directory. This section is entirely optional, and if it exists can also be empty.

The exclude subsection can contain one or more of each of the following fields:

rel_path
A relative path to be excluded from the backup. The path is assumed to be relative to the Subversion parent directory itself. For instance, if the configured Subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software. This field can be repeated as many times as is necessary.

Restrictions: Must be non-empty.

pattern
A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary.

Restrictions: Must be non-empty.

MySQL Extension

The MySQL Extension is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.
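The exclusion pattern semantics described above (a Python regular expression implicitly anchored at both ends) can be reproduced with Python's re module. This helper is an illustrative sketch, not Cedar Backup's actual code:

```python
import re

def is_excluded(rel_path, patterns):
    """Return True if rel_path matches any exclusion pattern.

    re.fullmatch() anchors the pattern at both ends, mirroring the
    "treated as if it begins with ^ and ends with $" rule; the helper
    name is hypothetical, for illustration only.
    """
    return any(re.fullmatch(pattern, rel_path) for pattern in patterns)
```

So a configured pattern of .*debian.* excludes any entry whose whole name contains "debian", while a bare pattern of debian excludes only an entry named exactly "debian".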
This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2.

Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via command-line switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

[mysqldump]
user     = root
password = <secret>

Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

[mysqldump]
host = remote.host

For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information.
Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>mysql</name>
      <module>CedarBackup2.extend.mysql</module>
      <function>executeAction</function>
      <index>99</index>
   </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

<mysql>
   <compress_mode>bzip2</compress_mode>
   <all>Y</all>
</mysql>

If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

<mysql>
   <user>root</user>
   <password>password</password>
   <compress_mode>bzip2</compress_mode>
   <all>Y</all>
</mysql>

The following elements are part of the MySQL configuration section:

user
Database user. The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user). This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

Restrictions: If provided, must be non-empty.

password
Password associated with the database user. This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

Restrictions: If provided, must be non-empty.

compress_mode
Compress mode. MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

Restrictions: Must be one of none, gzip or bzip2.
all
Indicates whether to back up all databases. If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file.

Restrictions: Must be a boolean (Y or N).

database
Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

Restrictions: Must be non-empty.

PostgreSQL Extension

Community-contributed Extension: This is a community-contributed extension provided by Antoine Beaupre ("The Anarcat"). I have added regression tests around the configuration parsing code and I will maintain this section in the user manual based on his source code documentation. Unfortunately, I don't have any PostgreSQL databases with which to test the functional code. While I have code-reviewed the code and it looks both sensible and safe, I have to rely on the author to ensure that it works properly.

The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client.
This can be accomplished using appropriate configuration in the pg_hba.conf file.

This extension always produces a full backup. There is currently no facility for making incremental backups.

Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>postgresql</name>
      <module>CedarBackup2.extend.postgresql</module>
      <function>executeAction</function>
      <index>99</index>
   </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

<postgresql>
   <compress_mode>bzip2</compress_mode>
   <user>username</user>
   <all>Y</all>
</postgresql>

If you decide to back up specific databases, then you would list them individually, like this:

<postgresql>
   <compress_mode>bzip2</compress_mode>
   <user>username</user>
   <all>N</all>
   <database>db1</database>
   <database>db2</database>
</postgresql>

The following elements are part of the PostgreSQL configuration section:

user
Database user. The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user. This value is optional. Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.
Restrictions: If provided, must be non-empty.

compress_mode
Compress mode. PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

Restrictions: Must be one of none, gzip or bzip2.

all
Indicates whether to back up all databases. If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file.

Restrictions: Must be a boolean (Y or N).

database
Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

Restrictions: Must be non-empty.

Mbox Extension

The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders. What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up.
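The grepmail technique just described can be sketched as a command builder. The -d date-restriction option and the "since <date>" datespec form come from grepmail's documentation; whether a search pattern must also be supplied may depend on your grepmail version, and the helper itself is illustrative rather than Cedar Backup's actual code:

```python
def grepmail_command(mbox_path, since):
    """Build a grepmail invocation that selects only messages received
    since the given date, which is the core of the incremental mbox
    approach described above.  The -d option takes a date restriction
    such as "since 01 May 2013"; consult grepmail(1) for the datespec
    forms your installed version accepts.
    """
    return ["grepmail", "-d", "since %s" % since, mbox_path]
```

The selected messages (written to grepmail's stdout) would then be compressed and stored like any other collected file.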
This can potentially save a lot of space.

Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>mbox</name>
      <module>CedarBackup2.extend.mbox</module>
      <function>executeAction</function>
      <index>99</index>
   </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

<mbox>
   <collect_mode>incr</collect_mode>
   <compress_mode>gzip</compress_mode>
   <file>
      <abs_path>/home/user1/mail/greylist</abs_path>
      <collect_mode>daily</collect_mode>
   </file>
   <dir>
      <abs_path>/home/user2/mail</abs_path>
   </dir>
   <dir>
      <abs_path>/home/user3/mail</abs_path>
      <exclude>
         <rel_path>spam</rel_path>
         <pattern>.*debian.*</pattern>
      </exclude>
   </dir>
</mbox>

Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively. Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed: only relative path exclusions and patterns.

The following elements are part of the mbox configuration section:

collect_mode
Default collect mode. The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action. This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value.
If all individual files or directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

Restrictions: Must be one of daily, weekly or incr.

compress_mode
Default compress mode. Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

Restrictions: Must be one of none, gzip or bzip2.

file
An individual mbox file to be collected. This is a subsection which contains information about an individual mbox file to be backed up. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

The file subsection contains the following fields:

collect_mode
Collect mode for this file. This field is optional. If it doesn't exist, the backup will use the default collect mode.

Restrictions: Must be one of daily, weekly or incr.

compress_mode
Compress mode for this file. This field is optional. If it doesn't exist, the backup will use the default compress mode.

Restrictions: Must be one of none, gzip or bzip2.

abs_path
Absolute path of the mbox file to back up.

Restrictions: Must be an absolute path.

dir
An mbox directory to be collected. This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively.
Only the files immediately within the configured directory will be backed up and any subdirectories will be ignored. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The dir subsection contains the following fields: collect_mode Collect mode for this directory. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this directory. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the mbox directory to back up. Restrictions: Must be an absolute path. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: rel_path A relative path to be excluded from the backup. The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. Encrypt Extension The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run.
This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action; otherwise, unencrypted data will be written to disc. There are several different ways encryption could have been built into or layered onto Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced. Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL. If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless. I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc. Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (e.g. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.) An encrypted backup has the same file structure as a normal backup, so all of the instructions in apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg).
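The naming rule and the gpg invocation described above can be sketched as follows. These are illustrative helpers, not the extension's actual code; the command list simply mirrors the manual's gpg -e -r "Recipient Name" file sanity check.

```python
def encrypted_path(path):
    """An encrypted staging file keeps its original name, plus .gpg."""
    return path + ".gpg"

def gpg_encrypt_command(recipient, path):
    """Build the gpg command line used to encrypt one staged file,
    matching the sanity check shown in the text."""
    return ["gpg", "-e", "-r", recipient, path]

print(encrypted_path("file.tar.gz"))                       # file.tar.gz.gpg
print(gpg_encrypt_command("Backup User", "file.tar.gz"))
```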
To recover encrypted data, simply log on as a user that has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual. Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at and gain an understanding of how encryption can help you or hurt you. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>encrypt</name> <module>CedarBackup2.extend.encrypt</module> <function>executeAction</function> <index>301</index> </action> </extensions> This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section: <encrypt> <encrypt_mode>gpg</encrypt_mode> <encrypt_target>Backup User</encrypt_target> </encrypt> The following elements are part of the Encrypt configuration section: encrypt_mode Encryption mode. This value specifies which encryption mechanism will be used by the extension. Currently, only the GPG public-key encryption mechanism is supported. Restrictions: Must be gpg. encrypt_target Encryption target. The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r. Split Extension The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc.
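The effect of splitting can be sketched in pure Python. This is a toy model under stated assumptions: it chops a byte string into fixed-size chunks the way a splitting tool chops a large staged file, with the last chunk possibly shorter; the real extension operates on files on disk, not in-memory strings.

```python
def split_into_chunks(data, chunk_size):
    """Split a byte string into fixed-size chunks; the final chunk
    may be smaller when chunk_size does not divide the total size.
    Every chunk is needed to reconstruct the original data."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

chunks = split_into_chunks(b"x" * 250, 100)
print([len(c) for c in chunks])        # [100, 100, 50]
print(b"".join(chunks) == b"x" * 250)  # True -- all chunks together restore the file
```

Note that reconstruction requires concatenating every chunk in order, which is exactly why a file split across several discs forces you to have all of those discs on hand.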
You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span. The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files at byte boundaries; it has no knowledge of file formats. Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback-span might put an individual file on any disc in a set — the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>split</name> <module>CedarBackup2.extend.split</module> <function>executeAction</function> <index>299</index> </action> </extensions> This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section: <split> <size_limit>250 MB</size_limit> <split_size>100 MB</split_size> </split> The following elements are part of the Split configuration section: size_limit Size limit. Files with a size strictly larger than this limit will be split by the extension. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a size as described above. split_size Split size. This is the size of the chunks that a large file will be split into.
The final chunk may be smaller if the split size doesn't divide evenly into the file size. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a size as described above. Capacity Extension The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused. This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>capacity</name> <module>CedarBackup2.extend.capacity</module> <function>executeAction</function> <index>299</index> </action> </extensions> This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full: <capacity> <max_percentage>95.5</max_percentage> </capacity> This example configures the extension to warn if the media has fewer than 16 MB free: <capacity> <min_bytes>16 MB</min_bytes> </capacity> The following elements are part of the Capacity configuration section: max_percentage Maximum percentage of the media that may be utilized. You must provide either this value or the min_bytes value. Restrictions: Must be a floating point number between 0.0 and 100.0 min_bytes Minimum number of free bytes that must be available. You can enter this value in two different forms. 
It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. You must provide either this value or the max_percentage value. Restrictions: Must be a byte quantity as described above. CedarBackup2-2.22.0/setup.py0000775000175000017500000000520011645150366017266 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: setup.py 1022 2011-10-11 23:27:49Z pronovic $ # Purpose : Python distutils setup script # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # pylint: disable=C0111 ######################################################################## # Imported modules ######################################################################## from distutils.core import setup from CedarBackup2.release import AUTHOR, EMAIL, VERSION, COPYRIGHT, URL ######################################################################## # Setup configuration ######################################################################## LONG_DESCRIPTION = """ Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. 
If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language. """ setup ( name = 'CedarBackup2', version = VERSION, description = 'Implements local and remote backups to CD/DVD media.', long_description = LONG_DESCRIPTION, keywords = ('local', 'remote', 'backup', 'scp', 'CD-R', 'CD-RW', 'DVD+R', 'DVD+RW',), author = AUTHOR, author_email = EMAIL, url = URL, license = "Copyright (c) %s %s. Licensed under the GNU GPL." % (COPYRIGHT, AUTHOR), platforms = ('Any',), packages = ['CedarBackup2', 'CedarBackup2.actions', 'CedarBackup2.extend', 'CedarBackup2.tools', 'CedarBackup2.writers', ], scripts = ['cback', 'util/cback-span', ], ) CedarBackup2-2.22.0/util/0002775000175000017500000000000012143054372016525 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/util/docbook/0002775000175000017500000000000012143054372020145 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/util/docbook/chunk-stylesheet.xsl0000664000175000017500000000434611163707063024204 0ustar pronovicpronovic00000000000000 styles.css 3 0 CedarBackup2-2.22.0/util/docbook/styles.css0000664000175000017500000000675111163707063022214 0ustar pronovicpronovic00000000000000/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * C E D A R * S O L U T I O N S "Software done right." * S O F T W A R E * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Author : Kenneth J. Pronovici * Language : XSLT * Project : Cedar Backup, release 2 * Revision : $Id: styles.css 245 2005-01-28 23:41:19Z pronovic $ * Purpose : Custom stylesheet applied to user manual in HTML form. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ /* This stylesheet was originally taken from the Subversion project's book (http://svnbook.red-bean.com/). I have not made any modifications to the sheet for use with Cedar Backup. The original stylesheet was (c) 2000-2004 CollabNet (see CREDITS). */ BODY { background: white; margin: 0.5in; font-family: arial,helvetica,sans-serif; } H1.title { font-size: 250%; font-style: normal; font-weight: bold; color: black; } H2.subtitle { font-size: 150%; font-style: italic; color: black; } H2.title { font-size: 150%; font-style: normal; font-weight: bold; color: black; } H3.title { font-size: 125%; font-style: normal; font-weight: bold; color: black; } H4.title { font-size: 100%; font-style: normal; font-weight: bold; color: black; } .toc B { font-size: 125%; font-style: normal; font-weight: bold; color: black; } P,LI,UL,OL,DD,DT { font-style: normal; font-weight: normal; color: black; } TT,PRE { font-family: courier new,courier,fixed; } .command, .screen, .programlisting { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; } .filename { font-family: arial,helvetica,sans-serif; font-style: italic; } A { color: blue; text-decoration: underline; } A:hover { background: rgb(75%,75%,100%); color: blue; text-decoration: underline; } A:visited { color: purple; text-decoration: underline; } IMG { border: none; } .figure, .example, .table { margin: 0.125in 0.5in; } .table TABLE { border: 1px rgb(180,180,200) solid; border-spacing: 0px; } .table TD { border: 1px rgb(180,180,200) solid; } .table TH { background: rgb(180,180,200); border: 1px rgb(180,180,200) solid; } .table P.title, .figure P.title, .example P.title { text-align: left !important; font-size: 100% !important; } .author { font-size: 100%; font-style: italic; font-weight: normal; color: black; } .sidebar { border: 2px black solid; background: rgb(230,230,235); padding: 0.12in; margin: 0 0.5in; } .sidebar P.title { 
text-align: center; font-size: 125%; } .tip { border: black solid 1px; background: url(./images/info.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .warning { border: black solid 1px; background: url(./images/warning.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .note { border: black solid 1px; background: url(./images/note.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .programlisting, .screen { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; font-size: 90%; color: black; margin: 0 0.5in; } .navheader, .navfooter { border: black solid 1px; background: rgb(180,180,200); } .navheader HR, .navfooter HR { display: none; } CedarBackup2-2.22.0/util/docbook/dblite.dtd0000664000175000017500000005070411163707063022114 0ustar pronovicpronovic00000000000000 %db; CedarBackup2-2.22.0/util/docbook/html-stylesheet.xsl0000664000175000017500000000435511163707063024040 0ustar pronovicpronovic00000000000000 styles.css 3 0 CedarBackup2-2.22.0/util/cback-span0000775000175000017500000000151111415154516020453 0ustar pronovicpronovic00000000000000#!/usr/bin/python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: cback 605 2005-02-25 00:51:07Z pronovic $ # Purpose : Implements Cedar Backup cback-span script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback-span script. @author: Kenneth J. 
Pronovici """ import sys from CedarBackup2.tools.span import cli result = cli() sys.exit(result) CedarBackup2-2.22.0/util/knapsackdemo.py0000775000175000017500000001364411415165677021564 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: knapsackdemo.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Demo the knapsack functionality in knapsack.py # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Demo the knapsack functionality in knapsack.py. This is a little test program that shows how the various knapsack algorithms work. Use 'python knapsackdemo.py' to run the program. The usage is:: Usage: knapsackdemo.py dir capacity Tests various knapsack (fit) algorithms on dir, using capacity (in MB) as the target fill point. 
You'll get a good feel for how it works using something like this:: python knapsackdemo.py /usr/bin 35 The output should look fine on an 80-column display. On my Duron 850 with 784MB of RAM (Linux 2.6, Python 2.3), this runs in 0.360 seconds of elapsed time (neglecting the time required to build the list of files to fit). A bigger, nastier test is to build a 650 MB list out of / or /usr. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules and constants ######################################################################## import sys import os import time from CedarBackup2.filesystem import BackupFileList from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit BYTES_PER_KBYTE = 1024.0 KBYTES_PER_MBYTE = 1024.0 BYTES_PER_MBYTE = BYTES_PER_KBYTE * KBYTES_PER_MBYTE ################## # main() function ################## def main(): """Main routine.""" # Check arguments if len(sys.argv) != 3: print "Usage: %s dir capacity" % sys.argv[0] print "Tests various knapsack (fit) algorithms on dir, using" print "capacity (in MB) as the target fill point." sys.exit(1) searchDir = sys.argv[1] capacity = float(sys.argv[2]) # Print a starting banner print "" print "==============================================================" print "KNAPSACK TEST PROGRAM" print "==============================================================" print "" print "This program tests various knapsack (fit) algorithms using" print "a list of files gathered from a directory. The algorithms" print "attempt to fit the files into a finite sized \"disc\"." print "" print "Each algorithm runs on a list with the same contents, although" print "the actual function calls are provided with a copy of the" print "original list, so they may use their list destructively." 
print "" print "==============================================================" print "" # Get information about the search directory start = time.time() start = time.time() files = BackupFileList() files.addDirContents(searchDir) size = files.totalSize() size /= BYTES_PER_MBYTE end = time.time() # Generate a table mapping file to size as needed by the knapsack algorithms table = { } for entry in files: if os.path.islink(entry): table[entry] = (entry, 0.0) elif os.path.isfile(entry): table[entry] = (entry, float(os.stat(entry).st_size)) # Print some status information about what we're doing print "Note: desired capacity is %.2f MB." % capacity print "The search path, %s, contains about %.2f MB in %d files." % (searchDir, size, len(files)) print "Gathering this information took about %.3f seconds." % (end - start) print "" # Define the list of tests # (These are function pointers, essentially.) tests = { 'FIRST FIT': firstFit, ' BEST FIT': bestFit, 'WORST FIT': worstFit, ' ALT FIT': alternateFit } # Run each test totalElapsed = 0.0 for key in tests.keys(): # Run and time the test start = time.time() (items, used) = tests[key](table.copy(), capacity*BYTES_PER_MBYTE) end = time.time() count = len(items) # Calculate derived values countPercent = (float(count)/float(len(files))) * 100.0 usedPercent = (float(used)/(float(capacity)*BYTES_PER_MBYTE)) * 100.0 elapsed = end - start totalElapsed += elapsed # Display the results print "%s: %5d files (%6.2f%%), %6.2f MB (%6.2f%%), elapsed: %8.5f sec" % ( key, count, countPercent, used/BYTES_PER_MBYTE, usedPercent, elapsed) # And, print the total elapsed time print "\nTotal elapsed processing time was about %.3f seconds." 
% totalElapsed ######################################################################## # Module entry point ######################################################################## # Run the main routine if the module is executed rather than sourced if __name__ == '__main__': main() CedarBackup2-2.22.0/util/test.py0000775000175000017500000002461712122614200020056 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: test.py 999 2010-07-07 19:58:25Z pronovic $ # Purpose : Run all of the unit tests for the project. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Run the CedarBackup2 unit tests. 
This script runs all of the unit tests at once so we can get one big success or failure result, rather than 20 different smaller results that we somehow have to aggregate together to get the "big picture". This is done by creating and running one big unit test suite based on the suites in the individual unit test modules. The composite suite is always run using the TextTestRunner at verbosity level 1, which prints one dot (".") on the screen for each test run. This output is the same as one would get when using unittest.main() in an individual test. Generally, I'm trying to keep all of the "special" validation logic (i.e. did we find the right Python, did we find the right libraries, etc.) in this code rather than in the individual unit tests so they're more focused on what to test than how their environment should be configured. We want to make sure the tests use the modules in the current source tree, not any versions previously-installed elsewhere, if possible. We don't actually import the modules here, but we warn if the wrong ones would be found. We also want to make sure we are running the correct 'test' package - not one found elsewhere on the user's path - since 'test' could be a relatively common name for a package. Most people will want to run the script with no arguments. This will result in a "reduced feature set" test suite that covers all of the available test suites, but executes only those tests with no surprising system, kernel or network dependencies. If "full" is specified as one of the command-line arguments, then all of the unit tests will be run, including those that require a specialized environment. For instance, some tests require remote connectivity, a loopback filesystem, etc. Other arguments on the command line are assumed to be named tests, so for instance passing "config" runs only the tests for config.py. Any number of individual tests may be listed on the command line, and unknown values will simply be ignored. 
@note: Even if you run this test with the C{python2.5} interpreter, some of the individual unit tests require the C{python} interpreter. In particular, the utility tests (in test/utiltests.py) use brief Python script snippets with known results to verify the behavior of C{executeCommand}. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import os import logging import unittest ################## # main() function ################## def main(): """ Main routine for program. @return: Integer 0 upon success, integer 1 upon failure. """ # Check the Python version. We require 2.5 or greater. try: if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5]: print "Python version 2.5 or greater required, sorry." return 1 except: # sys.version_info isn't available before 2.0 print "Python version 2.5 or greater required, sorry." return 1 # Check for the correct CedarBackup2 location and import utilities try: if os.path.exists(os.path.join(".", "CedarBackup2", "filesystem.py")): sys.path.insert(0, ".") elif os.path.basename(os.getcwd()) == "testcase" and os.path.exists(os.path.join("..", "CedarBackup2", "filesystem.py")): sys.path.insert(0, "..") else: print "WARNING: CedarBackup2 modules were not found in the expected" print "location. If the import succeeds, you may be using an" print "unexpected version of CedarBackup2." print "" from CedarBackup2.util import nullDevice, Diagnostics except ImportError, e: print "Failed to import CedarBackup2 util module: %s" % e print "You must either run the unit tests from the CedarBackup2 source" print "tree, or properly set the PYTHONPATH enviroment variable." 
return 1 # Setup platform-specific command overrides from CedarBackup2.testutil import setupOverrides setupOverrides() # Import the unit test modules try: if os.path.exists(os.path.join(".", "testcase", "filesystemtests.py")): sys.path.insert(0, ".") elif os.path.basename(os.getcwd()) == "testcase" and os.path.exists(os.path.join("..", "testcase", "filesystemtests.py")): sys.path.insert(0, "..") else: print "WARNING: CedarBackup2 unit test modules were not found in" print "the expected location. If the import succeeds, you may be" print "using an unexpected version of the test suite." print "" from testcase import utiltests from testcase import knapsacktests from testcase import filesystemtests from testcase import peertests from testcase import actionsutiltests from testcase import writersutiltests from testcase import cdwritertests from testcase import dvdwritertests from testcase import configtests from testcase import clitests from testcase import mysqltests from testcase import postgresqltests from testcase import subversiontests from testcase import mboxtests from testcase import encrypttests from testcase import splittests from testcase import spantests from testcase import capacitytests from testcase import customizetests except ImportError, e: print "Failed to import CedarBackup2 unit test module: %s" % e print "You must either run the unit tests from the CedarBackup2 source" print "tree, or properly set the PYTHONPATH enviroment variable." 
return 1 # Set up logging to discard everything devnull = nullDevice() handler = logging.FileHandler(filename=devnull) handler.setLevel(logging.NOTSET) logger = logging.getLogger("CedarBackup2") logger.setLevel(logging.NOTSET) logger.addHandler(handler) # Get a list of program arguments args = sys.argv[1:] # Set flags in the environment to control tests if "full" in args: full = True os.environ["PEERTESTS_FULL"] = "Y" os.environ["WRITERSUTILTESTS_FULL"] = "Y" os.environ["ENCRYPTTESTS_FULL"] = "Y" os.environ["SPLITTESTS_FULL"] = "Y" args.remove("full") # remainder of list will be specific tests to run, if any else: full = False os.environ["PEERTESTS_FULL"] = "N" os.environ["WRITERSUTILTESTS_FULL"] = "N" os.environ["ENCRYPTTESTS_FULL"] = "N" os.environ["SPLITTESTS_FULL"] = "N" # Print a starting banner print "\n*** Running CedarBackup2 unit tests." if not full: print "*** Using reduced feature set suite with minimum system requirements." # Make a list of tests to run unittests = { } if args == [] or "util" in args: unittests["util"] = utiltests.suite() if args == [] or "knapsack" in args: unittests["knapsack"] = knapsacktests.suite() if args == [] or "filesystem" in args: unittests["filesystem"] = filesystemtests.suite() if args == [] or "peer" in args: unittests["peer"] = peertests.suite() if args == [] or "actionsutil" in args: unittests["actionsutil"] = actionsutiltests.suite() if args == [] or "writersutil" in args: unittests["writersutil"] = writersutiltests.suite() if args == [] or "cdwriter" in args: unittests["cdwriter"] = cdwritertests.suite() if args == [] or "dvdwriter" in args: unittests["dvdwriter"] = dvdwritertests.suite() if args == [] or "config" in args: unittests["config"] = configtests.suite() if args == [] or "cli" in args: unittests["cli"] = clitests.suite() if args == [] or "mysql" in args: unittests["mysql"] = mysqltests.suite() if args == [] or "postgresql" in args: unittests["postgresql"] = postgresqltests.suite() if args == [] or 
"subversion" in args: unittests["subversion"] = subversiontests.suite() if args == [] or "mbox" in args: unittests["mbox"] = mboxtests.suite() if args == [] or "split" in args: unittests["split"] = splittests.suite() if args == [] or "encrypt" in args: unittests["encrypt"] = encrypttests.suite() if args == [] or "span" in args: unittests["span"] = spantests.suite() if args == [] or "capacity" in args: unittests["capacity"] = capacitytests.suite() if args == [] or "customize" in args: unittests["customize"] = customizetests.suite() if args != []: print "*** Executing specific tests: %s" % unittests.keys() # Print some diagnostic information print "" Diagnostics().printDiagnostics(prefix="*** ") # Create and run the test suite print "" suite = unittest.TestSuite(unittests.values()) suiteResult = unittest.TextTestRunner(verbosity=1).run(suite) print "" if not suiteResult.wasSuccessful(): return 1 else: return 0 ######################################################################## # Module entry point ######################################################################## # Run the main routine if the module is executed rather than sourced if __name__ == '__main__': result = main() sys.exit(result) CedarBackup2-2.22.0/cback0000775000175000017500000000172411415155732016546 0ustar pronovicpronovic00000000000000#!/usr/bin/python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: cback 998 2010-07-07 19:56:08Z pronovic $ # Purpose : Implements Cedar Backup cback script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback script. @author: Kenneth J. 
Pronovici """ try: import sys from CedarBackup2.cli import cli except ImportError, e: print "Failed to import Python modules: %s" % e print "Are you running a proper version of Python?" sys.exit(1) result = cli() sys.exit(result) CedarBackup2-2.22.0/testcase/0002775000175000017500000000000012143054372017363 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/testcase/mboxtests.py0000664000175000017500000024021411415165677022002 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: mboxtests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests mbox extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/mbox.py. 
Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in extend/mbox.py. There are also tests for several of the private methods. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a MBOXTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author: Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup2.testutil import findResources, failUnlessAssignRaises from CedarBackup2.xmlutil import createOutputDom, serializeDom from CedarBackup2.extend.mbox import LocalConfig, MboxConfig, MboxFile, MboxDir ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "mbox.conf.1", "mbox.conf.2", "mbox.conf.3", "mbox.conf.4", ] ####################################################################### # Test Case Classes ####################################################################### ##################### # TestMboxFile class ##################### class TestMboxFile(unittest.TestCase): """Tests for the MboxFile class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxFile() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) self.failUnlessEqual(None, mboxFile.collectMode) self.failUnlessEqual(None, mboxFile.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ mboxFile = MboxFile("/path/to/it", "daily", "gzip") self.failUnlessEqual("/path/to/it", mboxFile.absolutePath) self.failUnlessEqual("daily", mboxFile.collectMode) self.failUnlessEqual("gzip", mboxFile.compressMode) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ mboxFile = MboxFile(absolutePath="/path/to/something") self.failUnlessEqual("/path/to/something", mboxFile.absolutePath) mboxFile.absolutePath = None self.failUnlessEqual(None, mboxFile.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) mboxFile.absolutePath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", mboxFile.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) self.failUnlessAssignRaises(ValueError, mboxFile, "absolutePath", "") self.failUnlessEqual(None, mboxFile.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (not absolute). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) self.failUnlessAssignRaises(ValueError, mboxFile, "absolutePath", "relative/path") self.failUnlessEqual(None, mboxFile.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. 
""" mboxFile = MboxFile(collectMode="daily") self.failUnlessEqual("daily", mboxFile.collectMode) mboxFile.collectMode = None self.failUnlessEqual(None, mboxFile.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.collectMode) mboxFile.collectMode = "daily" self.failUnlessEqual("daily", mboxFile.collectMode) mboxFile.collectMode = "weekly" self.failUnlessEqual("weekly", mboxFile.collectMode) mboxFile.collectMode = "incr" self.failUnlessEqual("incr", mboxFile.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.collectMode) self.failUnlessAssignRaises(ValueError, mboxFile, "collectMode", "") self.failUnlessEqual(None, mboxFile.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.collectMode) self.failUnlessAssignRaises(ValueError, mboxFile, "collectMode", "monthly") self.failUnlessEqual(None, mboxFile.collectMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, None value. """ mboxFile = MboxFile(compressMode="gzip") self.failUnlessEqual("gzip", mboxFile.compressMode) mboxFile.compressMode = None self.failUnlessEqual(None, mboxFile.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, valid value. """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.compressMode) mboxFile.compressMode = "none" self.failUnlessEqual("none", mboxFile.compressMode) mboxFile.compressMode = "bzip2" self.failUnlessEqual("bzip2", mboxFile.compressMode) mboxFile.compressMode = "gzip" self.failUnlessEqual("gzip", mboxFile.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, invalid value (empty). 
""" mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.compressMode) self.failUnlessAssignRaises(ValueError, mboxFile, "compressMode", "") self.failUnlessEqual(None, mboxFile.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.compressMode) self.failUnlessAssignRaises(ValueError, mboxFile, "compressMode", "compress") self.failUnlessEqual(None, mboxFile.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mboxFile1 = MboxFile() mboxFile2 = MboxFile() self.failUnlessEqual(mboxFile1, mboxFile2) self.failUnless(mboxFile1 == mboxFile2) self.failUnless(not mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(mboxFile1 >= mboxFile2) self.failUnless(not mboxFile1 != mboxFile2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/path", "daily", "gzip") self.failUnlessEqual(mboxFile1, mboxFile2) self.failUnless(mboxFile1 == mboxFile2) self.failUnless(not mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(mboxFile1 >= mboxFile2) self.failUnless(not mboxFile1 != mboxFile2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). 
""" mboxFile1 = MboxFile() mboxFile2 = MboxFile(absolutePath="/zippy") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/zippy", "daily", "gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mboxFile1 = MboxFile() mboxFile2 = MboxFile(collectMode="incr") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/path", "incr", "gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" mboxFile1 = MboxFile() mboxFile2 = MboxFile(compressMode="gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ mboxFile1 = MboxFile("/path", "daily", "bzip2") mboxFile2 = MboxFile("/path", "daily", "gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) ##################### # TestMboxDir class ##################### class TestMboxDir(unittest.TestCase): """Tests for the MboxDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) self.failUnlessEqual(None, mboxDir.collectMode) self.failUnlessEqual(None, mboxDir.compressMode) self.failUnlessEqual(None, mboxDir.relativeExcludePaths) self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in. """ mboxDir = MboxDir("/path/to/it", "daily", "gzip", [ "whatever", ], [ ".*SPAM.*", ] ) self.failUnlessEqual("/path/to/it", mboxDir.absolutePath) self.failUnlessEqual("daily", mboxDir.collectMode) self.failUnlessEqual("gzip", mboxDir.compressMode) self.failUnlessEqual([ "whatever", ], mboxDir.relativeExcludePaths) self.failUnlessEqual([ ".*SPAM.*", ], mboxDir.excludePatterns) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ mboxDir = MboxDir(absolutePath="/path/to/something") self.failUnlessEqual("/path/to/something", mboxDir.absolutePath) mboxDir.absolutePath = None self.failUnlessEqual(None, mboxDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) mboxDir.absolutePath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", mboxDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) self.failUnlessAssignRaises(ValueError, mboxDir, "absolutePath", "") self.failUnlessEqual(None, mboxDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (not absolute). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) self.failUnlessAssignRaises(ValueError, mboxDir, "absolutePath", "relative/path") self.failUnlessEqual(None, mboxDir.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. 
""" mboxDir = MboxDir(collectMode="daily") self.failUnlessEqual("daily", mboxDir.collectMode) mboxDir.collectMode = None self.failUnlessEqual(None, mboxDir.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.collectMode) mboxDir.collectMode = "daily" self.failUnlessEqual("daily", mboxDir.collectMode) mboxDir.collectMode = "weekly" self.failUnlessEqual("weekly", mboxDir.collectMode) mboxDir.collectMode = "incr" self.failUnlessEqual("incr", mboxDir.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.collectMode) self.failUnlessAssignRaises(ValueError, mboxDir, "collectMode", "") self.failUnlessEqual(None, mboxDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.collectMode) self.failUnlessAssignRaises(ValueError, mboxDir, "collectMode", "monthly") self.failUnlessEqual(None, mboxDir.collectMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, None value. """ mboxDir = MboxDir(compressMode="gzip") self.failUnlessEqual("gzip", mboxDir.compressMode) mboxDir.compressMode = None self.failUnlessEqual(None, mboxDir.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, valid value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.compressMode) mboxDir.compressMode = "none" self.failUnlessEqual("none", mboxDir.compressMode) mboxDir.compressMode = "bzip2" self.failUnlessEqual("bzip2", mboxDir.compressMode) mboxDir.compressMode = "gzip" self.failUnlessEqual("gzip", mboxDir.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, invalid value (empty). 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.compressMode) self.failUnlessAssignRaises(ValueError, mboxDir, "compressMode", "") self.failUnlessEqual(None, mboxDir.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.compressMode) self.failUnlessAssignRaises(ValueError, mboxDir, "compressMode", "compress") self.failUnlessEqual(None, mboxDir.compressMode) def testConstructor_015(self): """ Test assignment of relativeExcludePaths attribute, None value. """ mboxDir = MboxDir(relativeExcludePaths=[]) self.failUnlessEqual([], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = None self.failUnlessEqual(None, mboxDir.relativeExcludePaths) def testConstructor_016(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = [] self.failUnlessEqual([], mboxDir.relativeExcludePaths) def testConstructor_017(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = ["stuff", ] self.failUnlessEqual(["stuff", ], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths.insert(0, "bogus") self.failUnlessEqual(["bogus", "stuff", ], mboxDir.relativeExcludePaths) def testConstructor_018(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = ["bogus", "stuff", ] self.failUnlessEqual(["bogus", "stuff", ], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths.append("more") self.failUnlessEqual(["bogus", "stuff", "more", ], mboxDir.relativeExcludePaths) def testConstructor_019(self): """ Test assignment of excludePatterns attribute, None value. """ mboxDir = MboxDir(excludePatterns=[]) self.failUnlessEqual([], mboxDir.excludePatterns) mboxDir.excludePatterns = None self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_020(self): """ Test assignment of excludePatterns attribute, [] value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = [] self.failUnlessEqual([], mboxDir.excludePatterns) def testConstructor_021(self): """ Test assignment of excludePatterns attribute, single valid entry. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = ["valid", ] self.failUnlessEqual(["valid", ], mboxDir.excludePatterns) mboxDir.excludePatterns.append("more") self.failUnlessEqual(["valid", "more", ], mboxDir.excludePatterns) def testConstructor_022(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = ["valid", "more", ] self.failUnlessEqual(["valid", "more", ], mboxDir.excludePatterns) mboxDir.excludePatterns.insert(1, "bogus") self.failUnlessEqual(["valid", "bogus", "more", ], mboxDir.excludePatterns) def testConstructor_023(self): """ Test assignment of excludePatterns attribute, single invalid entry. 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_024(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", "*" ]) self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_025(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", "valid" ]) self.failUnlessEqual(None, mboxDir.excludePatterns) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mboxDir1 = MboxDir() mboxDir2 = MboxDir() self.failUnlessEqual(mboxDir1, mboxDir2) self.failUnless(mboxDir1 == mboxDir2) self.failUnless(not mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(mboxDir1 >= mboxDir2) self.failUnless(not mboxDir1 != mboxDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/path", "daily", "gzip") self.failUnlessEqual(mboxDir1, mboxDir2) self.failUnless(mboxDir1 == mboxDir2) self.failUnless(not mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(mboxDir1 >= mboxDir2) self.failUnless(not mboxDir1 != mboxDir2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). 
""" mboxDir1 = MboxDir() mboxDir2 = MboxDir(absolutePath="/zippy") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/zippy", "daily", "gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(collectMode="incr") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/path", "incr", "gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" mboxDir1 = MboxDir() mboxDir2 = MboxDir(compressMode="gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ mboxDir1 = MboxDir("/path", "daily", "bzip2") mboxDir2 = MboxDir("/path", "daily", "gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_009(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(relativeExcludePaths=[]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_010(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one not empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(relativeExcludePaths=["stuff", "other", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_011(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one empty, one not empty). 
""" mboxDir1 = MboxDir("/etc/whatever", "incr", "none", ["one", ], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], []) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(not mboxDir1 < mboxDir2) self.failUnless(not mboxDir1 <= mboxDir2) self.failUnless(mboxDir1 > mboxDir2) self.failUnless(mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_012(self): """ Test comparison of two differing objects, relativeExcludePaths differs (both not empty). """ mboxDir1 = MboxDir("/etc/whatever", "incr", "none", ["one", ], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", ["two", ], []) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_013(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(excludePatterns=[]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_014(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). 
""" mboxDir1 = MboxDir() mboxDir2 = MboxDir(excludePatterns=["one", "two", "three", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_015(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). """ mboxDir1 = MboxDir("/etc/whatever", "incr", "none", [], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], ["pattern", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_016(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). 
""" mboxDir1 = MboxDir("/etc/whatever", "incr", "none", [], ["p1", ]) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], ["p2", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) ####################### # TestMboxConfig class ####################### class TestMboxConfig(unittest.TestCase): """Tests for the MboxConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.collectMode) self.failUnlessEqual(None, mbox.compressMode) self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, mboxFiles=None and mboxDirs=None. """ mbox = MboxConfig("daily", "gzip", None, None) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no mboxFiles, no mboxDirs. 
""" mbox = MboxConfig("daily", "gzip", [], []) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual([], mbox.mboxFiles) self.failUnlessEqual([], mbox.mboxDirs) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one mboxFile, no mboxDirs. """ mboxFiles = [ MboxFile(), ] mbox = MboxConfig("daily", "gzip", mboxFiles, []) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual(mboxFiles, mbox.mboxFiles) self.failUnlessEqual([], mbox.mboxDirs) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with no mboxFiles, one mboxDir. """ mboxDirs = [ MboxDir(), ] mbox = MboxConfig("daily", "gzip", [], mboxDirs) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual([], mbox.mboxFiles) self.failUnlessEqual(mboxDirs, mbox.mboxDirs) def testConstructor_006(self): """ Test constructor with all values filled in, with valid values, with multiple mboxFiles and mboxDirs. """ mboxFiles = [ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ] mboxDirs = [ MboxDir(collectMode="weekly"), MboxDir(collectMode="incr"), ] mbox = MboxConfig("daily", "gzip", mboxFiles=mboxFiles, mboxDirs=mboxDirs) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual(mboxFiles, mbox.mboxFiles) self.failUnlessEqual(mboxDirs, mbox.mboxDirs) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ mbox = MboxConfig(collectMode="daily") self.failUnlessEqual("daily", mbox.collectMode) mbox.collectMode = None self.failUnlessEqual(None, mbox.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. 
""" mbox = MboxConfig() self.failUnlessEqual(None, mbox.collectMode) mbox.collectMode = "weekly" self.failUnlessEqual("weekly", mbox.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.collectMode) self.failUnlessAssignRaises(ValueError, mbox, "collectMode", "") self.failUnlessEqual(None, mbox.collectMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, None value. """ mbox = MboxConfig(compressMode="gzip") self.failUnlessEqual("gzip", mbox.compressMode) mbox.compressMode = None self.failUnlessEqual(None, mbox.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, valid value. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.compressMode) mbox.compressMode = "bzip2" self.failUnlessEqual("bzip2", mbox.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.compressMode) self.failUnlessAssignRaises(ValueError, mbox, "compressMode", "") self.failUnlessEqual(None, mbox.compressMode) def testConstructor_013(self): """ Test assignment of mboxFiles attribute, None value. """ mbox = MboxConfig(mboxFiles=[]) self.failUnlessEqual([], mbox.mboxFiles) mbox.mboxFiles = None self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_014(self): """ Test assignment of mboxFiles attribute, [] value. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) mbox.mboxFiles = [] self.failUnlessEqual([], mbox.mboxFiles) def testConstructor_015(self): """ Test assignment of mboxFiles attribute, single valid entry. 
""" mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) mbox.mboxFiles = [ MboxFile(), ] self.failUnlessEqual([ MboxFile(), ], mbox.mboxFiles) mbox.mboxFiles.append(MboxFile(collectMode="daily")) self.failUnlessEqual([ MboxFile(), MboxFile(collectMode="daily"), ], mbox.mboxFiles) def testConstructor_016(self): """ Test assignment of mboxFiles attribute, multiple valid entries. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) mbox.mboxFiles = [ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ] self.failUnlessEqual([ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ], mbox.mboxFiles) mbox.mboxFiles.append(MboxFile(collectMode="incr")) self.failUnlessEqual([ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), MboxFile(collectMode="incr"), ], mbox.mboxFiles) def testConstructor_017(self): """ Test assignment of mboxFiles attribute, single invalid entry (None). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [None, ]) self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_018(self): """ Test assignment of mboxFiles attribute, single invalid entry (wrong type). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [MboxDir(), ]) self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_019(self): """ Test assignment of mboxFiles attribute, mixed valid and invalid entries. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [MboxFile(), MboxDir(), ]) self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_020(self): """ Test assignment of mboxDirs attribute, None value. 
""" mbox = MboxConfig(mboxDirs=[]) self.failUnlessEqual([], mbox.mboxDirs) mbox.mboxDirs = None self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_021(self): """ Test assignment of mboxDirs attribute, [] value. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) mbox.mboxDirs = [] self.failUnlessEqual([], mbox.mboxDirs) def testConstructor_022(self): """ Test assignment of mboxDirs attribute, single valid entry. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) mbox.mboxDirs = [ MboxDir(), ] self.failUnlessEqual([ MboxDir(), ], mbox.mboxDirs) mbox.mboxDirs.append(MboxDir(collectMode="daily")) self.failUnlessEqual([ MboxDir(), MboxDir(collectMode="daily"), ], mbox.mboxDirs) def testConstructor_023(self): """ Test assignment of mboxDirs attribute, multiple valid entries. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) mbox.mboxDirs = [ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), ] self.failUnlessEqual([ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), ], mbox.mboxDirs) mbox.mboxDirs.append(MboxDir(collectMode="incr")) self.failUnlessEqual([ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), MboxDir(collectMode="incr"), ], mbox.mboxDirs) def testConstructor_024(self): """ Test assignment of mboxDirs attribute, single invalid entry (None). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [None, ]) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_025(self): """ Test assignment of mboxDirs attribute, single invalid entry (wrong type). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [MboxFile(), ]) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_026(self): """ Test assignment of mboxDirs attribute, mixed valid and invalid entries. 
""" mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [MboxDir(), MboxFile(), ]) self.failUnlessEqual(None, mbox.mboxDirs) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mbox1 = MboxConfig() mbox2 = MboxConfig() self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, lists None. """ mbox1 = MboxConfig("daily", "gzip", None, None) mbox2 = MboxConfig("daily", "gzip", None, None) self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, lists empty. """ mbox1 = MboxConfig("daily", "gzip", [], []) mbox2 = MboxConfig("daily", "gzip", [], []) self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, lists non-empty. 
""" mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ], [MboxDir(), ]) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ], [MboxDir(), ]) self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mbox1 = MboxConfig() mbox2 = MboxConfig(collectMode="daily") self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ]) mbox2 = MboxConfig("weekly", "gzip", [ MboxFile(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ mbox1 = MboxConfig() mbox2 = MboxConfig(compressMode="bzip2") self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" mbox1 = MboxConfig("daily", "bzip2", [ MboxFile(), ]) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_009(self): """ Test comparison of two differing objects, mboxFiles differs (one None, one empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxFiles=[]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_010(self): """ Test comparison of two differing objects, mboxFiles differs (one None, one not empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxFiles=[MboxFile(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_011(self): """ Test comparison of two differing objects, mboxFiles differs (one empty, one not empty). """ mbox1 = MboxConfig("daily", "gzip", [ ], None) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ], None) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_012(self): """ Test comparison of two differing objects, mboxFiles differs (both not empty). 
""" mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ], None) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), MboxFile(), ], None) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_013(self): """ Test comparison of two differing objects, mboxDirs differs (one None, one empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxDirs=[]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_014(self): """ Test comparison of two differing objects, mboxDirs differs (one None, one not empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxDirs=[MboxDir(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_015(self): """ Test comparison of two differing objects, mboxDirs differs (one empty, one not empty). """ mbox1 = MboxConfig("daily", "gzip", None, [ ]) mbox2 = MboxConfig("daily", "gzip", None, [ MboxDir(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_016(self): """ Test comparison of two differing objects, mboxDirs differs (both not empty). 
""" mbox1 = MboxConfig("daily", "gzip", None, [ MboxDir(), ]) mbox2 = MboxConfig("daily", "gzip", None, [ MboxDir(), MboxDir(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the mbox configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.mbox) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.mbox) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["mbox.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of mbox attribute, None value. """ config = LocalConfig() config.mbox = None self.failUnlessEqual(None, config.mbox) def testConstructor_005(self): """ Test assignment of mbox attribute, valid value. """ config = LocalConfig() config.mbox = MboxConfig() self.failUnlessEqual(MboxConfig(), config.mbox) def testConstructor_006(self): """ Test assignment of mbox attribute, invalid value (not MboxConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "mbox", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.mbox = MboxConfig() config2 = LocalConfig() config2.mbox = MboxConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, mbox differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.mbox = MboxConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, mbox differs. """ config1 = LocalConfig() config1.mbox = MboxConfig(collectMode="daily") config2 = LocalConfig() config2.mbox = MboxConfig(collectMode="weekly") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None mbox section. """ config = LocalConfig() config.mbox = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty mbox section. """ config = LocalConfig() config.mbox = MboxConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty mbox section, mboxFiles=None and mboxDirs=None. 
""" config = LocalConfig() config.mbox = MboxConfig("weekly", "gzip", None, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty mbox section, mboxFiles=[] and mboxDirs=[]. """ config = LocalConfig() config.mbox = MboxConfig("weekly", "gzip", [], []) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, defaults set, no values on files. """ mboxFiles = [ MboxFile(absolutePath="/one"), MboxFile(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_006(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, defaults set, no values on directories. """ mboxDirs = [ MboxDir(absolutePath="/one"), MboxDir(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_007(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, no defaults set, no values on files. """ mboxFiles = [ MboxFile(absolutePath="/one"), MboxFile(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None self.failUnlessRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, no defaults set, no values on directories. 
""" mboxDirs = [ MboxDir(absolutePath="/one"), MboxDir(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs self.failUnlessRaises(ValueError, config.validate) def testValidate_009(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, no defaults set, both values on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_010(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, no defaults set, both values on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_011(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode only on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="weekly") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_012(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode only on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="weekly") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_013(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, compressMode only on files. 
""" mboxFiles = [ MboxFile(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "weekly" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_014(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, compressMode only on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "weekly" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_015(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, compressMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_016(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, compressMode default and on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_017(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="daily") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_018(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode default and on directories. 
""" mboxDirs = [ MboxDir(absolutePath="/two", collectMode="daily") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_019(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode and compressMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_020(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode and compressMode default and on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["mbox.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.mbox) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.mbox) def testParse_002(self): """ Parse config document with default modes, one collect file and one collect dir. 
""" mboxFiles = [ MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users"), ] mboxDirs = [ MboxDir(absolutePath="/home/billiejoe/mail"), ] path = self.resources["mbox.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("daily", config.mbox.collectMode) self.failUnlessEqual("gzip", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("daily", config.mbox.collectMode) self.failUnlessEqual("gzip", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) def testParse_003(self): """ Parse config document with no default modes, one collect file and one collect dir. """ mboxFiles = [ MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users", collectMode="daily", compressMode="gzip"), ] mboxDirs = [ MboxDir(absolutePath="/home/billiejoe/mail", collectMode="weekly", compressMode="bzip2"), ] path = self.resources["mbox.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual(None, config.mbox.collectMode) self.failUnlessEqual(None, config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual(None, config.mbox.collectMode) self.failUnlessEqual(None, config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) def testParse_004(self): """ Parse config document with default modes, several files with various overrides and exclusions. 
""" mboxFiles = [] mboxFile = MboxFile(absolutePath="/home/jimbo/mail/cedar-backup-users") mboxFiles.append(mboxFile) mboxFile = MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users", collectMode="daily", compressMode="gzip") mboxFiles.append(mboxFile) mboxDirs = [] mboxDir = MboxDir(absolutePath="/home/frank/mail/cedar-backup-users") mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/jimbob/mail", compressMode="bzip2", relativeExcludePaths=["logomachy-devel"]) mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/billiejoe/mail", collectMode="weekly", compressMode="bzip2", excludePatterns=[".*SPAM.*"]) mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/billybob/mail", relativeExcludePaths=["debian-devel", "debian-python", ], excludePatterns=[".*SPAM.*", ".*JUNK.*", ]) mboxDirs.append(mboxDir) path = self.resources["mbox.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("incr", config.mbox.collectMode) self.failUnlessEqual("none", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("incr", config.mbox.collectMode) self.failUnlessEqual("none", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ mbox = MboxConfig() config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_002(self): """ Test with defaults set, single mbox file with no optional values. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_003(self): """ Test with defaults set, single mbox directory with no optional values. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_004(self): """ Test with defaults set, single mbox file with collectMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="incr")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_005(self): """ Test with defaults set, single mbox directory with collectMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="incr")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_006(self): """ Test with defaults set, single mbox file with compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_007(self): """ Test with defaults set, single mbox directory with compressMode set. 
""" mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_008(self): """ Test with defaults set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_009(self): """ Test with defaults set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_010(self): """ Test with no defaults set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_011(self): """ Test with no defaults set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_012(self): """ Test with compressMode set, single mbox file with collectMode set. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly")) mbox = MboxConfig(compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_013(self): """ Test with compressMode set, single mbox directory with collectMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly")) mbox = MboxConfig(compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_014(self): """ Test with collectMode set, single mbox file with compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", compressMode="gzip")) mbox = MboxConfig(collectMode="weekly", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_015(self): """ Test with collectMode set, single mbox directory with compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", compressMode="gzip")) mbox = MboxConfig(collectMode="weekly", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_016(self): """ Test with compressMode set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="incr", compressMode="gzip")) mbox = MboxConfig(compressMode="bzip2", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_017(self): """ Test with compressMode set, single mbox directory with collectMode and compressMode set. 
""" mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="incr", compressMode="gzip")) mbox = MboxConfig(compressMode="bzip2", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_018(self): """ Test with collectMode set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="gzip")) mbox = MboxConfig(collectMode="incr", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_019(self): """ Test with collectMode set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="gzip")) mbox = MboxConfig(collectMode="incr", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_020(self): """ Test with defaults set, single mbox directory with relativeExcludePaths set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", relativeExcludePaths=["one", "two", ])) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_021(self): """ Test with defaults set, single mbox directory with excludePatterns set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", excludePatterns=["one", "two", ])) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_022(self): """ Test with defaults set, multiple mbox files and directories with collectMode and compressMode set. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path1", collectMode="daily", compressMode="gzip")) mboxFiles.append(MboxFile(absolutePath="/path2", collectMode="weekly", compressMode="gzip")) mboxFiles.append(MboxFile(absolutePath="/path3", collectMode="incr", compressMode="gzip")) mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path1", collectMode="daily", compressMode="bzip2")) mboxDirs.append(MboxDir(absolutePath="/path2", collectMode="weekly", compressMode="bzip2")) mboxDirs.append(MboxDir(absolutePath="/path3", collectMode="incr", compressMode="bzip2")) mbox = MboxConfig(collectMode="incr", compressMode="bzip2", mboxFiles=mboxFiles, mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMboxFile, 'test'), unittest.makeSuite(TestMboxDir, 'test'), unittest.makeSuite(TestMboxConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/cdwritertests.py0000664000175000017500000023714511415165677022671 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. 
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: cdwritertests.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Tests CD writer functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/writers/cdwriter.py.

This code was consolidated from writertests.py and imagetests.py at the
same time cdwriter.py was created.

Code Coverage
=============

   This module contains individual tests for the public classes implemented
   in cdwriter.py.

   Unfortunately, it's rather difficult to test this code in an automated
   fashion, even if you have access to a physical CD writer drive.  It's
   even more difficult to test it if you are running on some build daemon
   (think of a Debian autobuilder) which can't be expected to have any
   hardware or any media that you could write to.

   Because of this, there aren't any tests below that actually cause CD
   media to be written to.

   As a compromise, much of the implementation is in terms of private
   static methods that have well-defined behaviors.
   Normally, I prefer to test only the public interface to a class, but in
   this case, testing the private methods will help give us some reasonable
   confidence in the code, even if we can't write a physical disc or can't
   run all of the tests.  This isn't perfect, but it's better than nothing.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment in
   order to run successfully.  This environment won't necessarily be
   available on every build system out there (for instance, on a Debian
   autobuilder).  Because of this, the default behavior is to run a "reduced
   feature set" test suite that has no surprising system, kernel or network
   requirements.  There are no special dependencies for these tests.

   I used to try to run tests against an actual device, to make sure that
   this worked.  However, those tests ended up being kind of bogus, because
   my main development environment doesn't have a writer, and even if it had
   one, any device with the same name on another user's system wouldn't
   necessarily return sensible results.  That's just pointless.  We'll just
   have to rely on the other tests to make sure that things seem sensible.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest
from CedarBackup2.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter
from CedarBackup2.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80


#######################################################################
# Module-wide configuration and constants
#######################################################################

MB650 = (650.0*1024.0*1024.0)    # 650 MB
MB700 = (700.0*1024.0*1024.0)    # 700 MB
ILEAD = (11400.0*2048.0)         # Initial lead-in
SLEAD = (6900.0*2048.0)          # Session lead-in

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "tree9.tar.gz", ]

SUDO_CMD = [ "sudo", ]
HDIUTIL_CMD = [ "hdiutil", ]

INVALID_FILE = "bogus"           # This file name should never exist


#######################################################################
# Test Case Classes
#######################################################################

############################
# TestMediaDefinition class
############################

class TestMediaDefinition(unittest.TestCase):

   """Tests for the MediaDefinition class."""

   def testConstructor_001(self):
      """
      Test the constructor with an invalid media type.
      """
      self.failUnlessRaises(ValueError, MediaDefinition, 100)

   def testConstructor_002(self):
      """
      Test the constructor with the C{MEDIA_CDR_74} media type.
      """
      media = MediaDefinition(MEDIA_CDR_74)
      self.failUnlessEqual(MEDIA_CDR_74, media.mediaType)
      self.failUnlessEqual(False, media.rewritable)
      self.failIfEqual(0, media.initialLeadIn)    # just care that it's set, not what its value is
      self.failIfEqual(0, media.leadIn)           # just care that it's set, not what its value is
      self.failUnlessEqual(332800, media.capacity)

   def testConstructor_003(self):
      """
      Test the constructor with the C{MEDIA_CDRW_74} media type.
""" media = MediaDefinition(MEDIA_CDRW_74) self.failUnlessEqual(MEDIA_CDRW_74, media.mediaType) self.failUnlessEqual(True, media.rewritable) self.failIfEqual(0, media.initialLeadIn) # just care that it's set, not what its value is self.failIfEqual(0, media.leadIn) # just care that it's set, not what its value is self.failUnlessEqual(332800, media.capacity) def testConstructor_004(self): """ Test the constructor with the C{MEDIA_CDR_80} media type. """ media = MediaDefinition(MEDIA_CDR_80) self.failUnlessEqual(MEDIA_CDR_80, media.mediaType) self.failUnlessEqual(False, media.rewritable) self.failIfEqual(0, media.initialLeadIn) # just care that it's set, not what its value is self.failIfEqual(0, media.leadIn) # just care that it's set, not what its value is self.failUnlessEqual(358400, media.capacity) def testConstructor_005(self): """ Test the constructor with the C{MEDIA_CDRW_80} media type. """ media = MediaDefinition(MEDIA_CDRW_80) self.failUnlessEqual(MEDIA_CDRW_80, media.mediaType) self.failUnlessEqual(True, media.rewritable) self.failIfEqual(0, media.initialLeadIn) # just care that it's set, not what its value is self.failIfEqual(0, media.leadIn) # just care that it's set, not what its value is self.failUnlessEqual(358400, media.capacity) ############################ # TestMediaCapacity class ############################ class TestMediaCapacity(unittest.TestCase): """Tests for the MediaCapacity class.""" def testConstructor_001(self): """ Test the constructor. 
""" capacity = MediaCapacity(100, 200, (300, 400)) self.failUnlessEqual(100, capacity.bytesUsed) self.failUnlessEqual(200, capacity.bytesAvailable) self.failUnlessEqual((300, 400), capacity.boundaries) ##################### # TestCdWriter class ##################### class TestCdWriter(unittest.TestCase): """Tests for the CdWriter class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################### # Test constructor ################### def testConstructor_001(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid non-ATA SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True} """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_002(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid ATA SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="ATA:0,0,0", unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("ATA:0,0,0", writer.scsiId) self.failUnlessEqual("ATA:0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_003(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid ATAPI SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId="ATAPI:0,0,0", unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("ATAPI:0,0,0", writer.scsiId) self.failUnlessEqual("ATAPI:0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_004(self): """ Test the constructor with device C{/dev/null} (which is writable and exists). Use an invalid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=False}. """ self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="blech", unittest=False) def testConstructor_005(self): """ Test the constructor with device C{/dev/null} (which is writable and exists). Use an invalid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="blech", unittest=True) def testConstructor_006(self): """ Test the constructor with a non-absolute device path. Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=False}. """ self.failUnlessRaises(ValueError, CdWriter, device="dev/null", scsiId="0,0,0", unittest=False) def testConstructor_007(self): """ Test the constructor with a non-absolute device path. Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ self.failUnlessRaises(ValueError, CdWriter, device="dev/null", scsiId="0,0,0", unittest=True) def testConstructor_008(self): """ Test the constructor with an absolute device path that does not exist. Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=False}. 
""" self.failUnlessRaises(ValueError, CdWriter, device="/bogus", scsiId="0,0,0", unittest=False) def testConstructor_009(self): """ Test the constructor with an absolute device path that does not exist. Use a valid SCSI id and defaults for the remaining arguments. Make sure that C{unittest=True}. """ writer = CdWriter(device="/bogus", scsiId="0,0,0", unittest=True) self.failUnlessEqual("/bogus", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_010(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 0 for the drive speed. Make sure that C{unittest=False}. """ self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", driveSpeed=0, unittest=False) def testConstructor_011(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 0 for the drive speed. Make sure that C{unittest=True}. """ self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", driveSpeed=0, unittest=True) def testConstructor_012(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 1 for the drive speed. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId="0,0,0", driveSpeed=1, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(1, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_013(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a value of 5 for the drive speed. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", driveSpeed=5, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(5, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_014(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and an invalid media type. Make sure that C{unittest=False}. """ self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", mediaType=42, unittest=False) def testConstructor_015(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and an invalid media type. Make sure that C{unittest=True}. """ self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", mediaType=42, unittest=True) def testConstructor_016(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDR_74. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDR_74, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDR_74, writer.media.mediaType) self.failUnlessEqual(False, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_017(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDRW_74. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDRW_74, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_018(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDR_80. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDR_80, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDR_80, writer.media.mediaType) self.failUnlessEqual(False, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_019(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use a valid SCSI id and a media type of MEDIA_CDRW_80. Make sure that C{unittest=True}. 
""" writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDRW_80, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual("0,0,0", writer.scsiId) self.failUnlessEqual("0,0,0", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_80, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_020(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use None for SCSI id and a media type of MEDIA_CDRW_80. Make sure that C{unittest=True}. """ writer = CdWriter(device="/dev/null", scsiId=None, mediaType=MEDIA_CDRW_80, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual(None, writer.scsiId) self.failUnlessEqual("/dev/null", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_80, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(False, writer._noEject) def testConstructor_021(self): """ Test the constructor with device C{/dev/null}, which is writable and exists. Use None for SCSI id and a media type of MEDIA_CDRW_80. Make sure that C{unittest=True}. Use C{noEject=True}. """ writer = CdWriter(device="/dev/null", scsiId=None, mediaType=MEDIA_CDRW_80, noEject=True, unittest=True) self.failUnlessEqual("/dev/null", writer.device) self.failUnlessEqual(None, writer.scsiId) self.failUnlessEqual("/dev/null", writer.hardwareId) self.failUnlessEqual(None, writer.driveSpeed) self.failUnlessEqual(MEDIA_CDRW_80, writer.media.mediaType) self.failUnlessEqual(True, writer.isRewritable()) self.failUnlessEqual(True, writer._noEject) #################################### # Test the capacity-related methods #################################### def testCapacity_001(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDR_74. 
""" expectedAvailable = MB650-ILEAD # 650 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDR_74) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(0, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual(None, capacity.boundaries) def testCapacity_002(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDRW_74. """ expectedAvailable = MB650-ILEAD # 650 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDRW_74) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(0, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual(None, capacity.boundaries) def testCapacity_003(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDR_80. """ expectedAvailable = MB700-ILEAD # 700 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDR_80) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(0, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual(None, capacity.boundaries) def testCapacity_004(self): """ Test _calculateCapacity for boundaries of None and MEDIA_CDRW_80. """ expectedAvailable = MB700-ILEAD # 700 MB, minus initial lead-in media = MediaDefinition(MEDIA_CDRW_80) boundaries = None capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(0, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual(None, capacity.boundaries) def testCapacity_005(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDR_74. 
""" expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDR_74) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 1), capacity.boundaries) def testCapacity_006(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDRW_74. """ expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDRW_74) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 1), capacity.boundaries) def testCapacity_007(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDR_80. """ expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDR_80) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) # 700 MB - lead-in - 1 sector self.failUnlessEqual((0, 1), capacity.boundaries) def testCapacity_008(self): """ Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDRW_80. 
""" expectedUsed = (1*2048.0) # 1 sector expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1 sector media = MediaDefinition(MEDIA_CDRW_80) boundaries = (0, 1) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 1), capacity.boundaries) def testCapacity_009(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDR_74. """ expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDR_74) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 999), capacity.boundaries) def testCapacity_010(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDRW_74. """ expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDRW_74) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 999), capacity.boundaries) def testCapacity_011(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDR_80. 
""" expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDR_80) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 999), capacity.boundaries) def testCapacity_012(self): """ Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDRW_80. """ expectedUsed = (999*2048.0) # 999 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 999 sectors media = MediaDefinition(MEDIA_CDRW_80) boundaries = (0, 999) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 999), capacity.boundaries) def testCapacity_013(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDR_74. """ expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDR_74) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((500, 1000), capacity.boundaries) def testCapacity_014(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDRW_74. 
""" expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB650-SLEAD-expectedUsed # 650 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDRW_74) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((500, 1000), capacity.boundaries) def testCapacity_015(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDR_80. """ expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDR_80) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((500, 1000), capacity.boundaries) def testCapacity_016(self): """ Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDRW_80. """ expectedUsed = (1000*2048.0) # 1000 sectors expectedAvailable = MB700-SLEAD-expectedUsed # 700 MB, minus session lead-in, minus 1000 sectors media = MediaDefinition(MEDIA_CDRW_80) boundaries = (500, 1000) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) # 650 MB minus lead-in self.failUnlessEqual((500, 1000), capacity.boundaries) def testCapacity_017(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=False, useMulti=True. 
""" writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=False, useMulti=True) self.failUnlessEqual(None, boundaries) def testCapacity_018(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=True, useMulti=True. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=True, useMulti=True) self.failUnlessEqual(None, boundaries) def testCapacity_019(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=True, useMulti=False. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=False, useMulti=False) self.failUnlessEqual(None, boundaries) def testCapacity_020(self): """ Test _getBoundaries when self.deviceSupportsMulti is False; entireDisc=False, useMulti=False. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = False boundaries = writer._getBoundaries(entireDisc=False, useMulti=False) self.failUnlessEqual(None, boundaries) def testCapacity_021(self): """ Test _getBoundaries when self.deviceSupportsMulti is True; entireDisc=True, useMulti=True. """ writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = True boundaries = writer._getBoundaries(entireDisc=True, useMulti=True) self.failUnlessEqual(None, boundaries) def testCapacity_022(self): """ Test _getBoundaries when self.deviceSupportsMulti is True; entireDisc=True, useMulti=False. 
""" writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True) writer._deviceSupportsMulti = True boundaries = writer._getBoundaries(entireDisc=True, useMulti=False) self.failUnlessEqual(None, boundaries) def testCapacity_023(self): """ Test _calculateCapacity for boundaries of (321342, 330042) and MEDIA_CDRW_74. This was a bug fixed for v2.1.2. """ expectedUsed = (330042*2048.0) # 330042 sectors expectedAvailable = 0 # nothing should be available media = MediaDefinition(MEDIA_CDRW_74) boundaries = (321342, 330042) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((321342, 330042), capacity.boundaries) def testCapacity_024(self): """ Test _calculateCapacity for boundaries of (0, 330042) and MEDIA_CDRW_74. This was a bug fixed for v2.1.3. """ expectedUsed = (330042*2048.0) # 330042 sectors expectedAvailable = 0 # nothing should be available media = MediaDefinition(MEDIA_CDRW_74) boundaries = (0, 330042) capacity = CdWriter._calculateCapacity(media, boundaries) self.failUnlessEqual(expectedUsed, capacity.bytesUsed) self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable) self.failUnlessEqual((0, 330042), capacity.boundaries) ######################################### # Test methods that build argument lists ######################################### def testBuildArgs_001(self): """ Test _buildOpenTrayArgs(). """ args = CdWriter._buildOpenTrayArgs(device="/dev/stuff") self.failUnlessEqual(["/dev/stuff", ], args) def testBuildArgs_002(self): """ Test _buildCloseTrayArgs(). """ args = CdWriter._buildCloseTrayArgs(device="/dev/stuff") self.failUnlessEqual(["-t", "/dev/stuff", ], args) def testBuildArgs_003(self): """ Test _buildPropertiesArgs(). 
""" args = CdWriter._buildPropertiesArgs(hardwareId="0,0,0") self.failUnlessEqual(["-prcap", "dev=0,0,0", ], args) def testBuildArgs_004(self): """ Test _buildBoundariesArgs(). """ args = CdWriter._buildBoundariesArgs(hardwareId="ATA:0,0,0") self.failUnlessEqual(["-msinfo", "dev=ATA:0,0,0", ], args) def testBuildArgs_005(self): """ Test _buildBoundariesArgs(). """ args = CdWriter._buildBoundariesArgs(hardwareId="ATAPI:0,0,0") self.failUnlessEqual(["-msinfo", "dev=ATAPI:0,0,0", ], args) def testBuildArgs_006(self): """ Test _buildBlankArgs(), default drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATA:0,0,0") self.failUnlessEqual(["-v", "blank=fast", "dev=ATA:0,0,0", ], args) def testBuildArgs_007(self): """ Test _buildBlankArgs(), default drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATAPI:0,0,0") self.failUnlessEqual(["-v", "blank=fast", "dev=ATAPI:0,0,0", ], args) def testBuildArgs_008(self): """ Test _buildBlankArgs(), with None for drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="0,0,0", driveSpeed=None) self.failUnlessEqual(["-v", "blank=fast", "dev=0,0,0", ], args) def testBuildArgs_009(self): """ Test _buildBlankArgs(), with 1 for drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="0,0,0", driveSpeed=1) self.failUnlessEqual(["-v", "blank=fast", "speed=1", "dev=0,0,0", ], args) def testBuildArgs_010(self): """ Test _buildBlankArgs(), with 5 for drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATA:1,2,3", driveSpeed=5) self.failUnlessEqual(["-v", "blank=fast", "speed=5", "dev=ATA:1,2,3", ], args) def testBuildArgs_011(self): """ Test _buildBlankArgs(), with 5 for drive speed. """ args = CdWriter._buildBlankArgs(hardwareId="ATAPI:1,2,3", driveSpeed=5) self.failUnlessEqual(["-v", "blank=fast", "speed=5", "dev=ATAPI:1,2,3", ], args) def testBuildArgs_012(self): """ Test _buildWriteArgs(), default drive speed and writeMulti. 
""" args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever") self.failUnlessEqual(["-v", "dev=0,0,0", "-multi", "-data", "/whatever" ], args) def testBuildArgs_013(self): """ Test _buildWriteArgs(), None for drive speed, True for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=None, writeMulti=True) self.failUnlessEqual(["-v", "dev=0,0,0", "-multi", "-data", "/whatever" ], args) def testBuildArgs_014(self): """ Test _buildWriteArgs(), None for drive speed, False for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=None, writeMulti=False) self.failUnlessEqual(["-v", "dev=0,0,0", "-data", "/whatever" ], args) def testBuildArgs_015(self): """ Test _buildWriteArgs(), 1 for drive speed, True for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=1, writeMulti=True) self.failUnlessEqual(["-v", "speed=1", "dev=0,0,0", "-multi", "-data", "/whatever" ], args) def testBuildArgs_016(self): """ Test _buildWriteArgs(), 5 for drive speed, True for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,1,2", imagePath="/whatever", driveSpeed=5, writeMulti=True) self.failUnlessEqual(["-v", "speed=5", "dev=0,1,2", "-multi", "-data", "/whatever" ], args) def testBuildArgs_017(self): """ Test _buildWriteArgs(), 1 for drive speed, False for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/dvl/stuff/whatever/more", driveSpeed=1, writeMulti=False) self.failUnlessEqual(["-v", "speed=1", "dev=0,0,0", "-data", "/dvl/stuff/whatever/more" ], args) def testBuildArgs_018(self): """ Test _buildWriteArgs(), 5 for drive speed, False for writeMulti. 
""" args = CdWriter._buildWriteArgs(hardwareId="ATA:1,2,3", imagePath="/whatever", driveSpeed=5, writeMulti=False) self.failUnlessEqual(["-v", "speed=5", "dev=ATA:1,2,3", "-data", "/whatever" ], args) def testBuildArgs_019(self): """ Test _buildWriteArgs(), 5 for drive speed, False for writeMulti. """ args = CdWriter._buildWriteArgs(hardwareId="ATAPI:1,2,3", imagePath="/whatever", driveSpeed=5, writeMulti=False) self.failUnlessEqual(["-v", "speed=5", "dev=ATAPI:1,2,3", "-data", "/whatever" ], args) ########################################## # Test methods that parse cdrecord output ########################################## def testParseOutput_001(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example. """ output = [ "268582,302230\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.failUnlessEqual((268582, 302230), boundaries) def testParseOutput_002(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, lots of extra whitespace around the values. """ output = [ " 268582 , 302230 \n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.failUnlessEqual((268582, 302230), boundaries) def testParseOutput_003(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, lots of extra garbage after the first line. """ output = [ "268582,302230\n", "more\n", "bogus\n", "crap\n", "here\n", "to\n", "confuse\n", "things\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.failUnlessEqual((268582, 302230), boundaries) def testParseOutput_004(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, lots of extra garbage before the first line. 
""" output = [ "more\n", "bogus\n", "crap\n", "here\n", "to\n", "confuse\n", "things\n", "268582,302230\n", ] self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_005(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to negative. """ output = [ "-268582,302230\n", ] self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_006(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with second value converted to negative. """ output = [ "268582,-302230\n", ] self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_007(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to zero. """ output = [ "0,302230\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.failUnlessEqual((0, 302230), boundaries) def testParseOutput_008(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with second value converted to zero. """ output = [ "268582,0\n", ] boundaries = CdWriter._parseBoundariesOutput(output) self.failUnlessEqual((268582, 0), boundaries) def testParseOutput_009(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to negative and second value converted to zero. """ output = [ "-268582,0\n", ] self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_010(self): """ Test _parseBoundariesOutput() for valid data, taken from a real example, with first value converted to zero and second value converted to negative. """ output = [ "0,-302230\n", ] self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output) def testParseOutput_011(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support 
changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_012(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including only stdout. """ output = ['Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not 
read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_013(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device type removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have 
load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual(None, deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_014(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device vendor removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using 
Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual(None, deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_015(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device id removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have 
load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual(None, deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_016(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, buffer size removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read 
fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(None, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_017(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "supports multi" removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not 
have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(False, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_018(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "has tray" removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session 
CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(False, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_019(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "can eject" removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have 
load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(False, deviceCanEject) def testParseOutput_020(self): """ Test _parsePropertiesOutput() for nonsensical data, just a bunch of empty lines. """ output = [ '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual(None, deviceType) self.failUnlessEqual(None, deviceVendor) self.failUnlessEqual(None, deviceId) self.failUnlessEqual(None, deviceBufferSize) self.failUnlessEqual(False, deviceSupportsMulti) self.failUnlessEqual(False, deviceHasTray) self.failUnlessEqual(False, deviceCanEject) def testParseOutput_021(self): """ Test _parsePropertiesOutput() for nonsensical data, just an empty list. 
""" output = [ ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual(None, deviceType) self.failUnlessEqual(None, deviceVendor) self.failUnlessEqual(None, deviceId) self.failUnlessEqual(None, deviceBufferSize) self.failUnlessEqual(False, deviceSupportsMulti) self.failUnlessEqual(False, deviceHasTray) self.failUnlessEqual(False, deviceCanEject) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMediaDefinition, 'test'), unittest.makeSuite(TestMediaCapacity, 'test'), unittest.makeSuite(TestCdWriter, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/peertests.py0000664000175000017500000017425611415165677022004 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: peertests.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Tests peer functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/peer.py.

Code Coverage
=============

   This module contains individual tests for most of the public functions
   and classes implemented in peer.py, including the C{LocalPeer} and
   C{RemotePeer} classes.

   Unfortunately, some of the code can't be tested.  In particular, the
   stage code allows the caller to change ownership on files.  Generally,
   this can only be done by root, and most people won't be running these
   tests as root.  As such, we can't test this functionality.  There are
   also some other pieces of functionality that can only be tested in
   certain environments (see below).

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece
   of functionality, and I prefer to avoid using overly descriptive
   (read: long) test names, as well.  Instead, I use lots of very small
   tests that each validate one specific thing.  These small tests are
   then named with an index number, yielding something like
   C{testAddDir_001} or C{testValidate_010}.  Each method has a docstring
   describing what it's supposed to accomplish.
   I feel that this makes it easier to judge how important a given
   failure is, and also makes it somewhat easier to diagnose and fix
   individual problems.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment
   in order to run successfully.  This environment won't necessarily be
   available on every build system out there (for instance, on a Debian
   autobuilder).  Because of this, the default behavior is to run a
   "reduced feature set" test suite that has no surprising system,
   kernel or network requirements.  If you want to run all of the tests,
   set PEERTESTS_FULL to "Y" in the environment.

   In this module, network-related testing is what causes us our biggest
   problems.  In order to test the RemotePeer, we need a "remote" host
   that we can rcp to and from.  We want to fall back on using localhost
   and the current user, but that might not be safe or appropriate.  As
   such, we'll only run these tests if PEERTESTS_FULL is set to "Y" in
   the environment.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# Import standard modules
import os
import stat
import unittest
import tempfile

from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar
from CedarBackup2.testutil import getMaskAsMode, getLogin, runningAsRoot, failUnlessAssignRaises
from CedarBackup2.testutil import platformSupportsPermissions, platformWindows, platformCygwin
from CedarBackup2.peer import LocalPeer, RemotePeer
from CedarBackup2.peer import DEF_RCP_COMMAND, DEF_RSH_COMMAND
from CedarBackup2.peer import DEF_COLLECT_INDICATOR, DEF_STAGE_INDICATOR


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "tree1.tar.gz", "tree2.tar.gz", "tree9.tar.gz", ]

REMOTE_HOST = "localhost"                         # Always use login@localhost as our "remote" host
NONEXISTENT_FILE = "bogus"                        # This file name should never exist
NONEXISTENT_HOST = "hostname.invalid"             # RFC 2606 reserves the ".invalid" TLD for "obviously invalid" names
NONEXISTENT_USER = "unittestuser"                 # This user name should never exist on localhost
NONEXISTENT_CMD = "/bogus/~~~ZZZZ/bad/not/there"  # This command should never exist in the filesystem


#######################################################################
# Utility functions
#######################################################################

def runAllTests():
   """Returns true/false depending on whether the full test suite should be run."""
   if "PEERTESTS_FULL" in os.environ:
      return os.environ["PEERTESTS_FULL"] == "Y"
   else:
      return False


#######################################################################
# Test Case Classes
#######################################################################
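Illustration only, not part of the original peertests.py: the module docstring above describes how PEERTESTS_FULL gates the full test suite, and the sketch below mirrors the `runAllTests()` check as a standalone function so that behavior can be seen in isolation.  The name `run_all_tests` and the explicit `environ` parameter are hypothetical, introduced here for testability.

```python
# Hypothetical standalone mirror of runAllTests(): the full suite
# (including the RemotePeer network tests) runs only when the
# PEERTESTS_FULL environment variable is set to exactly "Y".
def run_all_tests(environ):
   """Return True only when PEERTESTS_FULL is set to "Y" in environ."""
   if "PEERTESTS_FULL" in environ:
      return environ["PEERTESTS_FULL"] == "Y"
   else:
      return False

print(run_all_tests({"PEERTESTS_FULL": "Y"}))   # True  -> full suite
print(run_all_tests({"PEERTESTS_FULL": "N"}))   # False -> reduced suite
print(run_all_tests({}))                        # False -> reduced suite
```

In the real module the same check reads from os.environ, so a shell invocation like `PEERTESTS_FULL=Y python testcase/peertests.py` selects the full suite.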
######################
# TestLocalPeer class
######################

class TestLocalPeer(unittest.TestCase):

   """Tests for the LocalPeer class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except: pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   def getFileMode(self, components):
      """Calls buildPath on components and then returns file mode for the file."""
      return stat.S_IMODE(os.stat(self.buildPath(components)).st_mode)

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ###########################
   # Test basic functionality
   ###########################

   def testBasic_001(self):
      """
      Make sure exception is thrown for non-absolute collect directory.
      """
      name = "peer1"
      collectDir = "whatever/something/else/not/absolute"
      self.failUnlessRaises(ValueError, LocalPeer, name, collectDir)

   def testBasic_002(self):
      """
      Make sure attributes are set properly for valid constructor input.
      """
      name = "peer1"
      collectDir = "/absolute/path/name"
      ignoreFailureMode = "all"
      peer = LocalPeer(name, collectDir, ignoreFailureMode)
      self.failUnlessEqual(name, peer.name)
      self.failUnlessEqual(collectDir, peer.collectDir)
      self.failUnlessEqual(ignoreFailureMode, peer.ignoreFailureMode)

   def testBasic_003(self):
      """
      Make sure attributes are set properly for valid constructor input,
      with spaces in the collect directory path.
""" name = "peer1" collectDir = "/ absolute / path/ name " peer = LocalPeer(name, collectDir) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) def testBasic_004(self): """ Make sure assignment works for all valid failure modes. """ name = "peer1" collectDir = "/absolute/path/name" ignoreFailureMode = "all" peer = LocalPeer(name, collectDir, ignoreFailureMode) self.failUnlessEqual("all", peer.ignoreFailureMode) peer.ignoreFailureMode = "none" self.failUnlessEqual("none", peer.ignoreFailureMode) peer.ignoreFailureMode = "daily" self.failUnlessEqual("daily", peer.ignoreFailureMode) peer.ignoreFailureMode = "weekly" self.failUnlessEqual("weekly", peer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, peer, "ignoreFailureMode", "bogus") ############################### # Test checkCollectIndicator() ############################### def testCheckCollectIndicator_001(self): """ Attempt to check collect indicator with non-existent collect directory. """ name = "peer1" collectDir = self.buildPath([NONEXISTENT_FILE, ]) self.failUnless(not os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_002(self): """ Attempt to check collect indicator with non-readable collect directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) os.chmod(collectDir, 0200) # user can't read his own directory peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) os.chmod(collectDir, 0777) # so we can remove it safely def testCheckCollectIndicator_003(self): """ Attempt to check collect indicator collect indicator file that does not exist. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_004(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name. """ name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", NONEXISTENT_FILE, ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator(collectIndicator=NONEXISTENT_FILE) self.failUnlessEqual(False, result) def testCheckCollectIndicator_005(self): """ Attempt to check collect indicator collect indicator file that does exist. """ name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) os.mkdir(collectDir) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.failUnlessEqual(True, result) def testCheckCollectIndicator_006(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) collectIndicator = self.buildPath(["collect", "different", ]) os.mkdir(collectDir) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator(collectIndicator="different") self.failUnlessEqual(True, result) def testCheckCollectIndicator_007(self): """ Attempt to check collect indicator collect indicator file that does exist, with spaces in the collect directory path. """ name = "peer1" collectDir = self.buildPath(["collect directory here", ]) collectIndicator = self.buildPath(["collect directory here", DEF_COLLECT_INDICATOR, ]) os.mkdir(collectDir) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator() self.failUnlessEqual(True, result) def testCheckCollectIndicator_008(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name, with spaces in the collect directory path and collect indicator file name. 
""" name = "peer1" if platformWindows() or platformCygwin(): # os.listdir has problems with trailing spaces collectDir = self.buildPath([" collect dir", ]) collectIndicator = self.buildPath([" collect dir", "different, file", ]) else: collectDir = self.buildPath([" collect dir ", ]) collectIndicator = self.buildPath([" collect dir ", "different, file", ]) os.mkdir(collectDir) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(collectIndicator)) peer = LocalPeer(name, collectDir) result = peer.checkCollectIndicator(collectIndicator="different, file") self.failUnlessEqual(True, result) ############################# # Test writeStageIndicator() ############################# def testWriteStageIndicator_001(self): """ Attempt to write stage indicator with non-existent collect directory. """ name = "peer1" collectDir = self.buildPath([NONEXISTENT_FILE, ]) self.failUnless(not os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.writeStageIndicator) def testWriteStageIndicator_002(self): """ Attempt to write stage indicator with non-writable collect directory. """ if not runningAsRoot(): # root doesn't get this error name = "peer1" collectDir = self.buildPath(["collect", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) os.chmod(collectDir, 0500) # read-only for user peer = LocalPeer(name, collectDir) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) os.chmod(collectDir, 0777) # so we can remove it safely def testWriteStageIndicator_003(self): """ Attempt to write stage indicator with non-writable collect directory, custom name. 
""" if not runningAsRoot(): # root doesn't get this error name = "peer1" collectDir = self.buildPath(["collect", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) os.chmod(collectDir, 0500) # read-only for user peer = LocalPeer(name, collectDir) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator, stageIndicator="something") os.chmod(collectDir, 0777) # so we can remove it safely def testWriteStageIndicator_004(self): """ Attempt to write stage indicator in a valid directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_005(self): """ Attempt to write stage indicator in a valid directory, custom name. """ name = "peer1" collectDir = self.buildPath(["collect", ]) stageIndicator = self.buildPath(["collect", "whatever", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator(stageIndicator="whatever") self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_006(self): """ Attempt to write stage indicator in a valid directory, with spaces in the directory name. """ name = "peer1" collectDir = self.buildPath(["collect from this directory", ]) stageIndicator = self.buildPath(["collect from this directory", DEF_STAGE_INDICATOR, ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_007(self): """ Attempt to write stage indicator in a valid directory, custom name, with spaces in the directory name and the file name. 
""" name = "peer1" collectDir = self.buildPath(["collect ME", ]) stageIndicator = self.buildPath(["collect ME", " whatever-it-takes you", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator(stageIndicator=" whatever-it-takes you") self.failUnless(os.path.exists(stageIndicator)) ################### # Test stagePeer() ################### def testStagePeer_001(self): """ Attempt to stage files with non-existent collect directory. """ name = "peer1" collectDir = self.buildPath([NONEXISTENT_FILE, ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(not os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_002(self): """ Attempt to stage files with non-readable collect directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = self.buildPath(["target", ]) os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(collectDir, 0200) # user can't read his own directory peer = LocalPeer(name, collectDir) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0777) # so we can remove it safely def testStagePeer_003(self): """ Attempt to stage files with non-absolute target directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = "this/is/not/absolute" os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_004(self): """ Attempt to stage files with non-existent target directory. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = self.buildPath(["target", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_005(self): """ Attempt to stage files with non-writable target directory. """ if not runningAsRoot(): # root doesn't get this error self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1"]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(targetDir, 0500) # read-only for user peer = LocalPeer(name, collectDir) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(targetDir, 0777) # so we can remove it safely self.failUnlessEqual(0, len(os.listdir(targetDir))) def testStagePeer_006(self): """ Attempt to stage files with empty collect directory. @note: This test assumes that scp returns an error if the directory is empty. """ self.extractTar("tree2") name = "peer1" collectDir = self.buildPath(["tree2", "dir001", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(IOError, peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_007(self): """ Attempt to stage files with empty collect directory, where the target directory name contains spaces. 
""" self.extractTar("tree2") name = "peer1" collectDir = self.buildPath(["tree2", "dir001", ]) if platformWindows(): targetDir = self.buildPath([" target directory", ]) # os.listdir has problems with trailing spaces else: targetDir = self.buildPath([" target directory ", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(IOError, peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_008(self): """ Attempt to stage files with non-empty collect directory. """ self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_009(self): """ Attempt to stage files with non-empty collect directory, where the target directory name contains spaces. 
""" self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target directory place", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_010(self): """ Attempt to stage files with non-empty collect directory containing links and directories. """ self.extractTar("tree9") name = "peer1" collectDir = self.buildPath(["tree9", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_011(self): """ Attempt to stage files with non-empty collect directory and attempt to set valid permissions. 
""" if platformSupportsPermissions(): self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) if getMaskAsMode() == 0400: permissions = 0642 # arbitrary, but different than umask would give else: permissions = 0400 # arbitrary count = peer.stagePeer(targetDir=targetDir, permissions=permissions) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) self.failUnlessEqual(permissions, self.getFileMode(["target", "file001", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file002", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file003", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file004", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file005", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file006", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file007", ])) ###################### # TestRemotePeer class ###################### class TestRemotePeer(unittest.TestCase): """Tests for the RemotePeer class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a 
tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def getFileMode(self, components): """Calls buildPath on components and then returns file mode for the file.""" return stat.S_IMODE(os.stat(self.buildPath(components)).st_mode) def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Tests basic functionality ############################ def testBasic_001(self): """ Make sure exception is thrown for non-absolute collect or working directory. """ name = REMOTE_HOST collectDir = "whatever/something/else/not/absolute" workingDir = "/tmp" remoteUser = getLogin() self.failUnlessRaises(ValueError, RemotePeer, name, collectDir, workingDir, remoteUser) name = REMOTE_HOST collectDir = "/whatever/something/else/not/absolute" workingDir = "tmp" remoteUser = getLogin() self.failUnlessRaises(ValueError, RemotePeer, name, collectDir, workingDir, remoteUser) def testBasic_002(self): """ Make sure attributes are set properly for valid constructor input. 
""" name = REMOTE_HOST collectDir = "/absolute/path/name" workingDir = "/tmp" remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) self.failUnlessEqual(None, peer.ignoreFailureMode) def testBasic_003(self): """ Make sure attributes are set properly for valid constructor input, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = "/absolute/path/to/ a large directory" workingDir = "/tmp" remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_004(self): """ Make sure attributes are set properly for valid constructor input, custom rcp command. 
""" name = REMOTE_HOST collectDir = "/absolute/path/name" workingDir = "/tmp" remoteUser = getLogin() rcpCommand = "rcp -one --two three \"four five\" 'six seven' eight" peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(rcpCommand, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(["rcp", "-one", "--two", "three", "four five", "'six", "seven'", "eight", ], peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_005(self): """ Make sure attributes are set properly for valid constructor input, custom local user command. """ name = REMOTE_HOST collectDir = "/absolute/path/to/ a large directory" workingDir = "/tmp" remoteUser = getLogin() localUser = "pronovic" peer = RemotePeer(name, collectDir, workingDir, remoteUser, localUser=localUser) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(localUser, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_006(self): """ Make sure attributes are set properly for valid constructor input, custom rsh command. 
""" name = REMOTE_HOST remoteUser = getLogin() rshCommand = "rsh --whatever -something \"a b\" else" peer = RemotePeer(name, remoteUser=remoteUser, rshCommand=rshCommand) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(None, peer.collectDir) self.failUnlessEqual(None, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(rshCommand, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(["rsh", "--whatever", "-something", "a b", "else", ], peer._rshCommandList) def testBasic_007(self): """ Make sure attributes are set properly for valid constructor input, custom cback command. """ name = REMOTE_HOST remoteUser = getLogin() cbackCommand = "cback --config=whatever --logfile=whatever --mode=064" peer = RemotePeer(name, remoteUser=remoteUser, cbackCommand=cbackCommand) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(None, peer.collectDir) self.failUnlessEqual(None, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(cbackCommand, peer.cbackCommand) def testBasic_008(self): """ Make sure assignment works for all valid failure modes. 
""" peer = RemotePeer(name="name", remoteUser="user", ignoreFailureMode="all") self.failUnlessEqual("all", peer.ignoreFailureMode) peer.ignoreFailureMode = "none" self.failUnlessEqual("none", peer.ignoreFailureMode) peer.ignoreFailureMode = "daily" self.failUnlessEqual("daily", peer.ignoreFailureMode) peer.ignoreFailureMode = "weekly" self.failUnlessEqual("weekly", peer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, peer, "ignoreFailureMode", "bogus") ############################### # Test checkCollectIndicator() ############################### def testCheckCollectIndicator_001(self): """ Attempt to check collect indicator with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_002(self): """ Attempt to check collect indicator with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = NONEXISTENT_USER os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_003(self): """ Attempt to check collect indicator with invalid rcp command. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_004(self): """ Attempt to check collect indicator with non-existent collect directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() self.failUnless(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_005(self): """ Attempt to check collect indicator with non-readable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) os.chmod(collectDir, 0200) # user can't read his own directory peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) os.chmod(collectDir, 0777) # so we can remove it safely def testCheckCollectIndicator_006(self): """ Attempt to check collect indicator collect indicator file that does not exist. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_007(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", NONEXISTENT_FILE, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_008(self): """ Attempt to check collect indicator collect indicator file that does not exist, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["collect directory path", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect directory path", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_009(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath([" you collect here ", ]) workingDir = "/tmp" collectIndicator = self.buildPath([" you collect here ", NONEXISTENT_FILE, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_010(self): """ Attempt to check collect indicator collect indicator file that does exist. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(True, result) def testCheckCollectIndicator_011(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", "whatever", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator(collectIndicator="whatever") self.failUnlessEqual(True, result) def testCheckCollectIndicator_012(self): """ Attempt to check collect indicator collect indicator file that does exist, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["collect NOT", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect NOT", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(True, result) def testCheckCollectIndicator_013(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name, where the collect directory and indicator file contain spaces. 
""" name = REMOTE_HOST collectDir = self.buildPath([" from here collect!", ]) workingDir = "/tmp" collectIndicator = self.buildPath([" from here collect!", "whatever, dude", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator(collectIndicator="whatever, dude") self.failUnlessEqual(True, result) ############################# # Test writeStageIndicator() ############################# def testWriteStageIndicator_001(self): """ Attempt to write stage indicator with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_002(self): """ Attempt to write stage indicator with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = NONEXISTENT_USER os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_003(self): """ Attempt to write stage indicator with invalid rcp command. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_004(self): """ Attempt to write stage indicator with non-existent collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() self.failUnless(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises(IOError, peer.writeStageIndicator) def testWriteStageIndicator_005(self): """ Attempt to write stage indicator with non-writable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) os.chmod(collectDir, 0400) # read-only for user peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) self.failUnless(not os.path.exists(stageIndicator)) os.chmod(collectDir, 0777) # so we can remove it safely def testWriteStageIndicator_006(self): """ Attempt to write stage indicator in a valid directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_007(self): """ Attempt to write stage indicator in a valid directory, custom name. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", "newname", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator(stageIndicator="newname") self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_008(self): """ Attempt to write stage indicator in a valid directory that contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["with spaces collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["with spaces collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_009(self): """ Attempt to write stage indicator in a valid directory, custom name, where the collect directory and the custom name contain spaces. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect, soon", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect, soon", "new name with spaces", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator(stageIndicator="new name with spaces") self.failUnless(os.path.exists(stageIndicator)) ################### # Test stagePeer() ################### def testStagePeer_001(self): """ Attempt to stage files with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_002(self): """ Attempt to stage files with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = NONEXISTENT_USER os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_003(self): """ Attempt to stage files with invalid rcp command. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_004(self): """ Attempt to stage files with non-existent collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(not os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_005(self): """ Attempt to stage files with non-readable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(collectDir, 0200) # user can't read his own directory peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0777) # so we can remove it safely def testStagePeer_006(self): """ Attempt to stage files with non-absolute target directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = "non/absolute/target" remoteUser = getLogin() self.failUnless(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_007(self): """ Attempt to stage files with non-existent target directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_008(self): """ Attempt to stage files with non-writable target directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(targetDir, 0400) # read-only for user peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0777) # so we can remove it safely self.failUnlessEqual(0, len(os.listdir(targetDir))) def testStagePeer_009(self): """ Attempt to stage files with empty collect directory. @note: This test assumes that scp returns an error if the directory is empty. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_010(self): """ Attempt to stage files with empty collect directory, with a target directory that contains spaces. @note: This test assumes that scp returns an error if the directory is empty. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target DIR", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_011(self): """ Attempt to stage files with non-empty collect directory. 
""" self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_012(self): """ Attempt to stage files with non-empty collect directory, with a target directory that contains spaces. """ self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["write the target here, now!", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_013(self): """ Attempt to stage files with non-empty collect directory containing links and directories. 
@note: We assume that scp copies the files even though it returns an error due to directories. """ self.extractTar("tree9") name = REMOTE_HOST collectDir = self.buildPath(["tree9", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(2, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) def testStagePeer_014(self): """ Attempt to stage files with non-empty collect directory and attempt to set valid permissions. """ self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) if getMaskAsMode() == 0400: permissions = 0642 # arbitrary, but different than umask would give else: permissions = 0400 # arbitrary count = peer.stagePeer(targetDir=targetDir, permissions=permissions) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) self.failUnlessEqual(permissions, self.getFileMode(["target", "file001", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", 
"file002", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file003", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file004", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file005", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file006", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file007", ])) ############################## # Test executeRemoteCommand() ############################## def testExecuteRemoteCommand(self): """ Test that a simple remote command succeeds. """ target = self.buildPath(["test.txt", ]) name = REMOTE_HOST remoteUser = getLogin() command = "touch %s" % target self.failIf(os.path.exists(target)) peer = RemotePeer(name=name, remoteUser=remoteUser) peer.executeRemoteCommand(command) self.failUnless(os.path.exists(target)) ############################ # Test _buildCbackCommand() ############################ def testBuildCbackCommand_001(self): """ Test with None for cbackCommand and action, False for fullBackup. """ self.failUnlessRaises(ValueError, RemotePeer._buildCbackCommand, None, None, False) def testBuildCbackCommand_002(self): """ Test with None for cbackCommand, "collect" for action, False for fullBackup. """ result = RemotePeer._buildCbackCommand(None, "collect", False) self.failUnlessEqual("/usr/bin/cback collect", result) def testBuildCbackCommand_003(self): """ Test with "cback" for cbackCommand, "collect" for action, False for fullBackup. """ result = RemotePeer._buildCbackCommand("cback", "collect", False) self.failUnlessEqual("cback collect", result) def testBuildCbackCommand_004(self): """ Test with "cback" for cbackCommand, "collect" for action, True for fullBackup. 
""" result = RemotePeer._buildCbackCommand("cback", "collect", True) self.failUnlessEqual("cback --full collect", result) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" if runAllTests(): return unittest.TestSuite(( unittest.makeSuite(TestLocalPeer, 'test'), unittest.makeSuite(TestRemotePeer, 'test'), )) else: return unittest.TestSuite(( unittest.makeSuite(TestLocalPeer, 'test'), unittest.makeSuite(TestRemotePeer, 'testBasic'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/postgresqltests.py0000664000175000017500000011346311415165677023245 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: postgresqltests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests PostgreSQL extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/postgresql.py. Code Coverage ============= This module contains individual tests for the many of the public functions and classes implemented in extend/postgresql.py. There are also tests for several of the private methods. Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to PostgreSQL, since the actual dump would need to have access to a real database. Because of this, there aren't any tests below that actually talk to a database. As a compromise, I test some of the private methods in the implementation. Normally, I don't like to test private methods, but in this case, testing the private methods will help give us some reasonable confidence in the code even if we can't talk to a database.. This isn't perfect, but it's better than nothing. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. 
I feel that this makes it easier to judge how important a given failure is,
and also makes it somewhat easier to diagnose and fix individual problems.

Testing XML Extraction
======================

It's difficult to validate that generated XML is exactly "right",
especially when dealing with pretty-printed XML.  We can't just provide a
constant string and say "the result must match this".  Instead, what we do
is extract a node, build some XML from it, and then feed that XML back into
another object's constructor.  If that parse process succeeds and the old
object is equal to the new object, we assume that the extract was
successful.

It would arguably be better if we could do a completely independent check -
but implementing that check would be equivalent to re-implementing all of
the existing functionality that we're validating here!  After all, the most
important thing is that data can move seamlessly from object to XML
document and back to object.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an
average build environment.  There is no need to use a POSTGRESQLTESTS_FULL
environment variable to provide a "reduced feature set" test suite as for
some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest

# Cedar Backup modules
from CedarBackup2.testutil import findResources, failUnlessAssignRaises
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.postgresql import LocalConfig, PostgresqlConfig


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "postgresql.conf.1", "postgresql.conf.2", "postgresql.conf.3",
              "postgresql.conf.4", "postgresql.conf.5", ]


#######################################################################
# Test Case Classes
#######################################################################

#############################
# TestPostgresqlConfig class
#############################

class TestPostgresqlConfig(unittest.TestCase):

   """Tests for the PostgresqlConfig class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = PostgresqlConfig()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
""" postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.user) self.failUnlessEqual(None, postgresql.compressMode) self.failUnlessEqual(False, postgresql.all) self.failUnlessEqual(None, postgresql.databases) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, databases=None. """ postgresql = PostgresqlConfig("user", "none", False, None) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("none", postgresql.compressMode) self.failUnlessEqual(False, postgresql.all) self.failUnlessEqual(None, postgresql.databases) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no databases. """ postgresql = PostgresqlConfig("user", "none", True, []) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("none", postgresql.compressMode) self.failUnlessEqual(True, postgresql.all) self.failUnlessEqual([], postgresql.databases) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one database. """ postgresql = PostgresqlConfig("user", "gzip", True, [ "one", ]) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("gzip", postgresql.compressMode) self.failUnlessEqual(True, postgresql.all) self.failUnlessEqual([ "one", ], postgresql.databases) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple databases. """ postgresql = PostgresqlConfig("user", "bzip2", True, [ "one", "two", ]) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("bzip2", postgresql.compressMode) self.failUnlessEqual(True, postgresql.all) self.failUnlessEqual([ "one", "two", ], postgresql.databases) def testConstructor_006(self): """ Test assignment of user attribute, None value. 
""" postgresql = PostgresqlConfig(user="user") self.failUnlessEqual("user", postgresql.user) postgresql.user = None self.failUnlessEqual(None, postgresql.user) def testConstructor_007(self): """ Test assignment of user attribute, valid value. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.user) postgresql.user = "user" self.failUnlessEqual("user", postgresql.user) def testConstructor_008(self): """ Test assignment of user attribute, invalid value (empty). """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.user) self.failUnlessAssignRaises(ValueError, postgresql, "user", "") self.failUnlessEqual(None, postgresql.user) def testConstructor_009(self): """ Test assignment of compressMode attribute, None value. """ postgresql = PostgresqlConfig(compressMode="none") self.failUnlessEqual("none", postgresql.compressMode) postgresql.compressMode = None self.failUnlessEqual(None, postgresql.compressMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, valid value. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.compressMode) postgresql.compressMode = "none" self.failUnlessEqual("none", postgresql.compressMode) postgresql.compressMode = "gzip" self.failUnlessEqual("gzip", postgresql.compressMode) postgresql.compressMode = "bzip2" self.failUnlessEqual("bzip2", postgresql.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, invalid value (empty). """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.compressMode) self.failUnlessAssignRaises(ValueError, postgresql, "compressMode", "") self.failUnlessEqual(None, postgresql.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, invalid value (not in list). 
""" postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.compressMode) self.failUnlessAssignRaises(ValueError, postgresql, "compressMode", "bogus") self.failUnlessEqual(None, postgresql.compressMode) def testConstructor_013(self): """ Test assignment of all attribute, None value. """ postgresql = PostgresqlConfig(all=True) self.failUnlessEqual(True, postgresql.all) postgresql.all = None self.failUnlessEqual(False, postgresql.all) def testConstructor_014(self): """ Test assignment of all attribute, valid value (real boolean). """ postgresql = PostgresqlConfig() self.failUnlessEqual(False, postgresql.all) postgresql.all = True self.failUnlessEqual(True, postgresql.all) postgresql.all = False self.failUnlessEqual(False, postgresql.all) def testConstructor_015(self): """ Test assignment of all attribute, valid value (expression). """ postgresql = PostgresqlConfig() self.failUnlessEqual(False, postgresql.all) postgresql.all = 0 self.failUnlessEqual(False, postgresql.all) postgresql.all = [] self.failUnlessEqual(False, postgresql.all) postgresql.all = None self.failUnlessEqual(False, postgresql.all) postgresql.all = ['a'] self.failUnlessEqual(True, postgresql.all) postgresql.all = 3 self.failUnlessEqual(True, postgresql.all) def testConstructor_016(self): """ Test assignment of databases attribute, None value. """ postgresql = PostgresqlConfig(databases=[]) self.failUnlessEqual([], postgresql.databases) postgresql.databases = None self.failUnlessEqual(None, postgresql.databases) def testConstructor_017(self): """ Test assignment of databases attribute, [] value. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) postgresql.databases = [] self.failUnlessEqual([], postgresql.databases) def testConstructor_018(self): """ Test assignment of databases attribute, single valid entry. 
""" postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) postgresql.databases = ["/whatever", ] self.failUnlessEqual(["/whatever", ], postgresql.databases) postgresql.databases.append("/stuff") self.failUnlessEqual(["/whatever", "/stuff", ], postgresql.databases) def testConstructor_019(self): """ Test assignment of databases attribute, multiple valid entries. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) postgresql.databases = ["/whatever", "/stuff", ] self.failUnlessEqual(["/whatever", "/stuff", ], postgresql.databases) postgresql.databases.append("/etc/X11") self.failUnlessEqual(["/whatever", "/stuff", "/etc/X11", ], postgresql.databases) def testConstructor_020(self): """ Test assignment of databases attribute, single invalid entry (empty). """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) self.failUnlessAssignRaises(ValueError, postgresql, "databases", ["", ]) self.failUnlessEqual(None, postgresql.databases) def testConstructor_021(self): """ Test assignment of databases attribute, mixed valid and invalid entries. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) self.failUnlessAssignRaises(ValueError, postgresql, "databases", ["good", "", "alsogood", ]) self.failUnlessEqual(None, postgresql.databases) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig() self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ postgresql1 = PostgresqlConfig("user", "gzip", True, None) postgresql2 = PostgresqlConfig("user", "gzip", True, None) self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. """ postgresql1 = PostgresqlConfig("user", "bzip2", True, []) postgresql2 = PostgresqlConfig("user", "bzip2", True, []) self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. 
""" postgresql1 = PostgresqlConfig("user", "none", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "none", True, [ "whatever", ]) self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_005(self): """ Test comparison of two differing objects, user differs (one None). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(user="user") self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_006(self): """ Test comparison of two differing objects, user differs. """ postgresql1 = PostgresqlConfig("user1", "gzip", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user2", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(compressMode="gzip") self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ postgresql1 = PostgresqlConfig("user", "bzip2", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_009(self): """ Test comparison of two differing objects, all differs (one None). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(all=True) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_010(self): """ Test comparison of two differing objects, all differs. 
""" postgresql1 = PostgresqlConfig("user", "gzip", False, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_011(self): """ Test comparison of two differing objects, databases differs (one None, one empty). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(databases=[]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_012(self): """ Test comparison of two differing objects, databases differs (one None, one not empty). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(databases=["whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_013(self): """ Test comparison of two differing objects, databases differs (one empty, one not empty). 
""" postgresql1 = PostgresqlConfig("user", "gzip", True, [ ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_014(self): """ Test comparison of two differing objects, databases differs (both not empty). """ postgresql1 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", "bogus", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) # note: different than standard due to unsorted list self.failUnless(not postgresql1 <= postgresql2) # note: different than standard due to unsorted list self.failUnless(postgresql1 > postgresql2) # note: different than standard due to unsorted list self.failUnless(postgresql1 >= postgresql2) # note: different than standard due to unsorted list self.failUnless(postgresql1 != postgresql2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. 
We dump a document containing just the postgresql configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.postgresql) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.postgresql) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["postgresql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of postgresql attribute, None value. """ config = LocalConfig() config.postgresql = None self.failUnlessEqual(None, config.postgresql) def testConstructor_005(self): """ Test assignment of postgresql attribute, valid value. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig() self.failUnlessEqual(PostgresqlConfig(), config.postgresql) def testConstructor_006(self): """ Test assignment of postgresql attribute, invalid value (not PostgresqlConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "postgresql", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.postgresql = PostgresqlConfig() config2 = LocalConfig() config2.postgresql = PostgresqlConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, postgresql differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.postgresql = PostgresqlConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, postgresql differs. 
""" config1 = LocalConfig() config1.postgresql = PostgresqlConfig(user="one") config2 = LocalConfig() config2.postgresql = PostgresqlConfig(user="two") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None postgresql section. """ config = LocalConfig() config.postgresql = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty postgresql section. """ config = LocalConfig() config.postgresql = PostgresqlConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty postgresql section, all=True, databases=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", True, None) config.validate() def testValidate_004(self): """ Test validate on a non-empty postgresql section, all=True, empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", True, []) config.validate() def testValidate_005(self): """ Test validate on a non-empty postgresql section, all=True, non-empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, ["whatever", ]) self.failUnlessRaises(ValueError, config.validate) def testValidate_006(self): """ Test validate on a non-empty postgresql section, all=False, databases=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty postgresql section, all=False, empty databases. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", False, []) self.failUnlessRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty postgresql section, all=False, non-empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, ["whatever", ]) config.validate() def testValidate_009(self): """ Test validate on a non-empty postgresql section, with user=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig(None, "gzip", True, None) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["postgresql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.postgresql) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.postgresql) def testParse_003(self): """ Parse config document containing only a postgresql section, no databases, all=True. 
""" path = self.resources["postgresql.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("none", config.postgresql.compressMode) self.failUnlessEqual(True, config.postgresql.all) self.failUnlessEqual(None, config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("none", config.postgresql.compressMode) self.failUnlessEqual(True, config.postgresql.all) self.failUnlessEqual(None, config.postgresql.databases) def testParse_004(self): """ Parse config document containing only a postgresql section, single database, all=False. """ path = self.resources["postgresql.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("gzip", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("gzip", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database", ], config.postgresql.databases) def testParse_005(self): """ Parse config document containing only a postgresql section, multiple databases, all=False. 
""" path = self.resources["postgresql.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) def testParse_006(self): """ Parse config document containing only a postgresql section, no user, multiple databases, all=False. """ path = self.resources["postgresql.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual(None, config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual(None, config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document """ config = LocalConfig() self.validateAddConfig(config) def testAddConfig_003(self): """ Test with no databases, all other values filled in, all=True. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", True, None) self.validateAddConfig(config) def testAddConfig_004(self): """ Test with no databases, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, None) self.validateAddConfig(config) def testAddConfig_005(self): """ Test with single database, all other values filled in, all=True. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, [ "database", ]) self.validateAddConfig(config) def testAddConfig_006(self): """ Test with single database, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", False, [ "database", ]) self.validateAddConfig(config) def testAddConfig_007(self): """ Test with multiple databases, all other values filled in, all=True. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_008(self): """ Test with multiple databases, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_009(self): """ Test with multiple databases, user=None but all other values filled in, all=False. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig(None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestPostgresqlConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/splittests.py0000664000175000017500000013224411415165677022173 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: splittests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests split extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/split.py. Code Coverage ============= This module contains individual tests for the the public classes implemented in extend/split.py. There are also tests for some of the private functions. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validated that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! 
   After all, the most important thing is that data can move seamlessly from
   object to XML document and back to object.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment in
   order to run successfully.  This environment won't necessarily be
   available on every build system out there (for instance, on a Debian
   autobuilder).  Because of this, the default behavior is to run a "reduced
   feature set" test suite that has no surprising system, kernel or network
   requirements.  If you want to run all of the tests, set SPLITTESTS_FULL
   to "Y" in the environment.

   In this module, the primary dependency is that the split utility must be
   available.  There is also one test that wants at least one non-English
   locale (fr_FR, pl_PL or ru_RU) available to check localization issues
   (but that test will just automatically be skipped if such a locale is not
   available).

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest
import os
import tempfile

# Cedar Backup modules
from CedarBackup2.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar
from CedarBackup2.testutil import failUnlessAssignRaises, availableLocales
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.split import LocalConfig, SplitConfig, ByteQuantity
from CedarBackup2.extend.split import _splitFile, _splitDailyDir


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "split.conf.1", "split.conf.2", "split.conf.3", "split.conf.4",
              "split.conf.5",
"tree21.tar.gz", ] INVALID_PATH = "bogus" # This path name should never exist ####################################################################### # Utility functions ####################################################################### def runAllTests(): """Returns true/false depending on whether the full test suite should be run.""" if "SPLITTESTS_FULL" in os.environ: return os.environ["SPLITTESTS_FULL"] == "Y" else: return False ####################################################################### # Test Case Classes ####################################################################### ########################## # TestSplitConfig class ########################## class TestSplitConfig(unittest.TestCase): """Tests for the SplitConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = SplitConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) self.failUnlessEqual(None, split.splitSize) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ split = SplitConfig(ByteQuantity("1.0", UNIT_BYTES), ByteQuantity("2.0", UNIT_KBYTES)) self.failUnlessEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) self.failUnlessEqual(ByteQuantity("2.0", UNIT_KBYTES), split.splitSize) def testConstructor_003(self): """ Test assignment of sizeLimit attribute, None value. 
""" split = SplitConfig(sizeLimit=ByteQuantity("1.0", UNIT_BYTES)) self.failUnlessEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) split.sizeLimit = None self.failUnlessEqual(None, split.sizeLimit) def testConstructor_004(self): """ Test assignment of sizeLimit attribute, valid value. """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) split.sizeLimit = ByteQuantity("1.0", UNIT_BYTES) self.failUnlessEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) def testConstructor_005(self): """ Test assignment of sizeLimit attribute, invalid value (empty). """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) self.failUnlessAssignRaises(ValueError, split, "sizeLimit", "") self.failUnlessEqual(None, split.sizeLimit) def testConstructor_006(self): """ Test assignment of sizeLimit attribute, invalid value (not a ByteQuantity). """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) self.failUnlessAssignRaises(ValueError, split, "sizeLimit", "1.0 GB") self.failUnlessEqual(None, split.sizeLimit) def testConstructor_007(self): """ Test assignment of splitSize attribute, None value. """ split = SplitConfig(splitSize=ByteQuantity("1.00", UNIT_KBYTES)) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), split.splitSize) split.splitSize = None self.failUnlessEqual(None, split.splitSize) def testConstructor_008(self): """ Test assignment of splitSize attribute, valid value. """ split = SplitConfig() self.failUnlessEqual(None, split.splitSize) split.splitSize = ByteQuantity("1.00", UNIT_KBYTES) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), split.splitSize) def testConstructor_009(self): """ Test assignment of splitSize attribute, invalid value (empty). 
""" split = SplitConfig() self.failUnlessEqual(None, split.splitSize) self.failUnlessAssignRaises(ValueError, split, "splitSize", "") self.failUnlessEqual(None, split.splitSize) def testConstructor_010(self): """ Test assignment of splitSize attribute, invalid value (not a ByteQuantity). """ split = SplitConfig() self.failUnlessEqual(None, split.splitSize) self.failUnlessAssignRaises(ValueError, split, "splitSize", 12) self.failUnlessEqual(None, split.splitSize) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ split1 = SplitConfig() split2 = SplitConfig() self.failUnlessEqual(split1, split2) self.failUnless(split1 == split2) self.failUnless(not split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(split1 >= split2) self.failUnless(not split1 != split2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ split1 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.failUnlessEqual(split1, split2) self.failUnless(split1 == split2) self.failUnless(not split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(split1 >= split2) self.failUnless(not split1 != split2) def testComparison_003(self): """ Test comparison of two differing objects, sizeLimit differs (one None). 
""" split1 = SplitConfig() split2 = SplitConfig(sizeLimit=ByteQuantity("99", UNIT_KBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) def testComparison_004(self): """ Test comparison of two differing objects, sizeLimit differs. """ split1 = SplitConfig(ByteQuantity("99", UNIT_BYTES), ByteQuantity("1.00", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) def testComparison_005(self): """ Test comparison of two differing objects, splitSize differs (one None). """ split1 = SplitConfig() split2 = SplitConfig(splitSize=ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) def testComparison_006(self): """ Test comparison of two differing objects, splitSize differs. 
""" split1 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("0.5", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the split configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.split) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.split) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["split.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of split attribute, None value. """ config = LocalConfig() config.split = None self.failUnlessEqual(None, config.split) def testConstructor_005(self): """ Test assignment of split attribute, valid value. """ config = LocalConfig() config.split = SplitConfig() self.failUnlessEqual(SplitConfig(), config.split) def testConstructor_006(self): """ Test assignment of split attribute, invalid value (not SplitConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "split", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.split = SplitConfig() config2 = LocalConfig() config2.split = SplitConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, split differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.split = SplitConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, split differs. """ config1 = LocalConfig() config1.split = SplitConfig(sizeLimit=ByteQuantity("0.1", UNIT_MBYTES)) config2 = LocalConfig() config2.split = SplitConfig(sizeLimit=ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None split section. """ config = LocalConfig() config.split = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty split section. """ config = LocalConfig() config.split = SplitConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty split section with no values filled in. 
""" config = LocalConfig() config.split = SplitConfig(None, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty split section with only one value filled in. """ config = LocalConfig() config.split = SplitConfig(ByteQuantity("1.00", UNIT_MBYTES), None) self.failUnlessRaises(ValueError, config.validate) config.split = SplitConfig(None, ByteQuantity("1.00", UNIT_MBYTES)) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty split section with valid values filled in. """ config = LocalConfig() config.split = SplitConfig(ByteQuantity("1.00", UNIT_MBYTES), ByteQuantity("1.00", UNIT_MBYTES)) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["split.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.split) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.split) def testParse_002(self): """ Parse config document with filled-in values, size in bytes. 
""" path = self.resources["split.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("12345", UNIT_BYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("67890.0", UNIT_BYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("12345", UNIT_BYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("67890.0", UNIT_BYTES), config.split.splitSize) def testParse_003(self): """ Parse config document with filled-in values, size in KB. """ path = self.resources["split.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_KBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_KBYTES), config.split.splitSize) def testParse_004(self): """ Parse config document with filled-in values, size in MB. """ path = self.resources["split.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_MBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_MBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_MBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_MBYTES), config.split.splitSize) def testParse_005(self): """ Parse config document with filled-in values, size in GB. 
""" path = self.resources["split.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_GBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_GBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_GBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_GBYTES), config.split.splitSize) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ split = SplitConfig() config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_002(self): """ Test with values set, byte values. """ split = SplitConfig(ByteQuantity("57521.0", UNIT_BYTES), ByteQuantity("121231", UNIT_BYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_003(self): """ Test with values set, KB values. """ split = SplitConfig(ByteQuantity("12", UNIT_KBYTES), ByteQuantity("63352", UNIT_KBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_004(self): """ Test with values set, MB values. """ split = SplitConfig(ByteQuantity("12", UNIT_MBYTES), ByteQuantity("63352", UNIT_MBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_005(self): """ Test with values set, GB values. 
""" split = SplitConfig(ByteQuantity("12", UNIT_GBYTES), ByteQuantity("63352", UNIT_GBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the functions in split.py.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def checkSplit(self, sourcePath, origSize, splitSize): """Checks that a file was split properly.""" wholeFiles = int(float(origSize) / float(splitSize)) leftoverBytes = int(float(origSize) % float(splitSize)) for i in range(0, wholeFiles): splitPath = "%s_%05d" % (sourcePath, i) self.failUnless(os.path.exists(splitPath)) self.failUnlessEqual(splitSize, os.stat(splitPath).st_size) if leftoverBytes > 0: splitPath = "%s_%05d" % (sourcePath, wholeFiles) self.failUnless(os.path.exists(splitPath)) self.failUnlessEqual(leftoverBytes, os.stat(splitPath).st_size) def findBadLocale(self): """ The split command localizes its output for certain locales. This breaks the parsing code in split.py. This method returns a list of the locales (if any) that are currently configured which could be expected to cause a failure if the localization-fixing code doesn't work. 
""" locales = availableLocales() if 'fr_FR' in locales: return 'fr_FR' if 'pl_PL' in locales: return 'pl_PL' if 'ru_RU' in locales: return 'ru_RU' return None #################### # Test _splitFile() #################### def testSplitFile_001(self): """ Test with a nonexistent file. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", INVALID_PATH ]) self.failIf(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) self.failUnlessRaises(ValueError, _splitFile, sourcePath, splitSize, None, None, removeSource=False) def testSplitFile_002(self): """ Test with integer split size, removeSource=False. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=False) self.failUnless(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_003(self): """ Test with floating point split size, removeSource=False. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320.1", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=False) self.failUnless(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_004(self): """ Test with integer split size, removeSource=True. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=True) self.failIf(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_005(self): """ Test with a local other than "C" or "en_US" set. 
""" locale = self.findBadLocale() if locale is not None: os.environ["LANG"] = locale os.environ["LC_ADDRESS"] = locale os.environ["LC_ALL"] = locale os.environ["LC_COLLATE"] = locale os.environ["LC_CTYPE"] = locale os.environ["LC_IDENTIFICATION"] = locale os.environ["LC_MEASUREMENT"] = locale os.environ["LC_MESSAGES"] = locale os.environ["LC_MONETARY"] = locale os.environ["LC_NAME"] = locale os.environ["LC_NUMERIC"] = locale os.environ["LC_PAPER"] = locale os.environ["LC_TELEPHONE"] = locale os.environ["LC_TIME"] = locale self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=True) self.failIf(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) ########################## # Test _splitDailyDir() ########################## def testSplitDailyDir_001(self): """ Test with a nonexistent daily staging directory. """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", INVALID_PATH, ]) self.failIf(os.path.exists(dailyDir)) sizeLimit = ByteQuantity("1.0", UNIT_MBYTES) splitSize = ByteQuantity("100000", UNIT_BYTES) self.failUnlessRaises(ValueError, _splitDailyDir, dailyDir, sizeLimit, splitSize, None, None) def testSplitDailyDir_002(self): """ Test with 1.0 MB limit. 
""" self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("1.0", UNIT_MBYTES) splitSize = ByteQuantity("100000", UNIT_BYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) def testSplitDailyDir_003(self): """ Test with 100,000 byte limit, chopped down to 10 KB """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) 
self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("100000", UNIT_BYTES) splitSize = ByteQuantity("10", UNIT_KBYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 10*1024) self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 10*1024) def testSplitDailyDir_004(self): """ Test with 99,999 byte limit, chopped down to 5,000 bytes """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) 
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("99999", UNIT_BYTES)
      splitSize = ByteQuantity("5000", UNIT_BYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 5000)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 5000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 5000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 5000)

   def testSplitDailyDir_005(self):
      """
      Test with 10,000 byte limit, chopped down to 2500 bytes
      """
      self.extractTar("tree21")
      dailyDir = self.buildPath(["tree21", "2007", "01", "01", ])
      self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("10000.0", UNIT_BYTES)
      splitSize = ByteQuantity("2500", UNIT_BYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 2500)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 2500)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 2500)

   def testSplitDailyDir_006(self):
      """
      Test with 10,000 byte limit, chopped down to 1024 bytes
      """
      self.extractTar("tree21")
      dailyDir = self.buildPath(["tree21", "2007", "01", "01", ])
      self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("10000", UNIT_BYTES)
      splitSize = ByteQuantity("1.0", UNIT_KBYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 1*1024)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 1*1024)

   def testSplitDailyDir_007(self):
      """
      Test with 9,999 byte limit, chopped down to 1000 bytes
      """
      self.extractTar("tree21")
      dailyDir = self.buildPath(["tree21", "2007", "01", "01", ])
      self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      sizeLimit = ByteQuantity("9999", UNIT_BYTES)
      splitSize = ByteQuantity("1000", UNIT_BYTES)
      _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None)
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003")))
      self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file001")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002")))
      self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003")))
      self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system2", "file002"), 10000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 1000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 1000)
      self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 1000)

#######################################################################
# Suite definition
#######################################################################

def suite():
   """Returns a suite containing all the test cases in this module."""
   if runAllTests():
      return unittest.TestSuite((
         unittest.makeSuite(TestSplitConfig, 'test'),
         unittest.makeSuite(TestLocalConfig, 'test'),
         unittest.makeSuite(TestFunctions, 'test'),
      ))
   else:
      return unittest.TestSuite((
         unittest.makeSuite(TestSplitConfig, 'test'),
         unittest.makeSuite(TestLocalConfig, 'test'),
      ))

########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()

CedarBackup2-2.22.0/testcase/configtests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2008,2010,2011 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: configtests.py 1041 2013-05-10 02:05:13Z pronovic $
# Purpose  : Tests configuration functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/config.py.

Code Coverage
=============

This module contains individual tests for the public functions and classes
implemented in config.py.

I usually prefer to test only the public interface to a class, because that
way the regression tests don't depend on the internal implementation.  In
this case, I've decided to test some of the private methods, because their
"privateness" is more a matter of presenting a clean external interface than
anything else.  In particular, this is the case with the private validation
functions (I use the private functions so I can test just the validations
for one specific case, even if the public interface only exposes one broad
validation).
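The per-case validation tests described above all follow a single pattern: attempt a property assignment, and fail the test unless the expected exception is raised and the object is left unchanged.  The sketch below illustrates that pattern with toy, self-contained names; it is NOT the real CedarBackup2 code (the actual helper is C{CedarBackup2.testutil.failUnlessAssignRaises}, and the real config classes have far richer validation):

```python
# Minimal, self-contained sketch of the property-assignment test pattern.
# ToyQuantity is a hypothetical stand-in for a config class whose property
# setter validates its input and raises ValueError on bad values.

class ToyQuantity(object):
   # Toy stand-in for a config class with a validated property.
   def __init__(self):
      self._value = None
   def _setValue(self, value):
      # Reject negative (or unparseable) values, mimicking config validation.
      if value is not None and float(value) < 0:
         raise ValueError("Value must be non-negative.")
      self._value = value
   def _getValue(self):
      return self._value
   value = property(_getValue, _setValue)

def failUnlessAssignRaises(testcase, exception, obj, prop, value):
   # Fail the test unless assigning value to obj.prop raises the
   # expected exception; a successful assignment is a test failure.
   try:
      setattr(obj, prop, value)
      testcase.fail("Expected %s to be raised." % exception.__name__)
   except exception:
      pass
```

In the real test classes below, this helper is wrapped as a method on each TestCase, so a single line like C{self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-3")} exercises exactly one validation.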

Naming Conventions
==================

I prefer to avoid large unit tests which validate more than one piece of
functionality, and I prefer to avoid using overly descriptive (read: long)
test names, as well.  Instead, I use lots of very small tests that each
validate one specific thing.  These small tests are then named with an
index number, yielding something like C{testAddDir_001} or
C{testValidate_010}.  Each method has a docstring describing what it's
supposed to accomplish.  I feel that this makes it easier to judge how
important a given failure is, and also makes it somewhat easier to diagnose
and fix individual problems.

Testing XML Extraction
======================

It's difficult to validate that generated XML is exactly "right",
especially when dealing with pretty-printed XML.  We can't just provide a
constant string and say "the result must match this".  Instead, what we do
is extract the XML and then feed it back into another object's constructor.
If that parse process succeeds and the old object is equal to the new
object, we assume that the extract was successful.

It would arguably be better if we could do a completely independent check -
but implementing that check would be equivalent to re-implementing all of
the existing functionality that we're validating here!  After all, the most
important thing is that data can move seamlessly from object to XML
document and back to object.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an average
build environment.  There is no need to use a CONFIGTESTS_FULL environment
variable to provide a "reduced feature set" test suite as for some of the
other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest

from CedarBackup2.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup2.testutil import findResources, failUnlessAssignRaises
from CedarBackup2.testutil import hexFloatLiteralAllowed
from CedarBackup2.config import ActionHook, PreActionHook, PostActionHook, CommandOverride
from CedarBackup2.config import ExtendedAction, ActionDependencies, BlankBehavior
from CedarBackup2.config import CollectFile, CollectDir, PurgeDir, LocalPeer, RemotePeer
from CedarBackup2.config import ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig
from CedarBackup2.config import CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config
from CedarBackup2.config import ByteQuantity

#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "cback.conf.1", "cback.conf.2", "cback.conf.3", "cback.conf.4",
              "cback.conf.5", "cback.conf.6", "cback.conf.7", "cback.conf.8",
              "cback.conf.9", "cback.conf.10", "cback.conf.11", "cback.conf.12",
              "cback.conf.13", "cback.conf.14", "cback.conf.15", "cback.conf.16",
              "cback.conf.17", "cback.conf.18", "cback.conf.19", "cback.conf.20",
              "cback.conf.21", "cback.conf.22", "cback.conf.23", ]

#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestByteQuantity class
##########################

class TestByteQuantity(unittest.TestCase):

   """Tests for the ByteQuantity class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
"""Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ByteQuantity() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(None, quantity.units) self.failUnlessEqual(0.0, quantity.bytes) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ quantity = ByteQuantity("6", UNIT_BYTES) self.failUnlessEqual("6", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) def testConstructor_003(self): """ Test assignment of quantity attribute, None value. """ quantity = ByteQuantity(quantity="1.0") self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(0.0, quantity.bytes) # because no units are set quantity.quantity = None self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.bytes) def testConstructor_004(self): """ Test assignment of quantity attribute, valid values. 
""" quantity = ByteQuantity() quantity.units = UNIT_BYTES # so we can test the bytes attribute self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.bytes) quantity.quantity = "1.0" self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.bytes) quantity.quantity = ".1" self.failUnlessEqual(".1", quantity.quantity) self.failUnlessEqual(0.1, quantity.bytes) quantity.quantity = "12" self.failUnlessEqual("12", quantity.quantity) self.failUnlessEqual(12.0, quantity.bytes) quantity.quantity = "0.5" self.failUnlessEqual("0.5", quantity.quantity) self.failUnlessEqual(0.5, quantity.bytes) quantity.quantity = "181281" self.failUnlessEqual("181281", quantity.quantity) self.failUnlessEqual(181281.0, quantity.bytes) quantity.quantity = "1E6" self.failUnlessEqual("1E6", quantity.quantity) self.failUnlessEqual(1.0e6, quantity.bytes) quantity.quantity = "0.25E2" self.failUnlessEqual("0.25E2", quantity.quantity) self.failUnlessEqual(0.25e2, quantity.bytes) if hexFloatLiteralAllowed(): # Some interpreters allow this, some don't quantity.quantity = "0xAC" self.failUnlessEqual("0xAC", quantity.quantity) self.failUnlessEqual(172.0, quantity.bytes) def testConstructor_005(self): """ Test assignment of quantity attribute, invalid value (empty). """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "") self.failUnlessEqual(None, quantity.quantity) def testConstructor_006(self): """ Test assignment of quantity attribute, invalid value (not a floating point number). """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "blech") self.failUnlessEqual(None, quantity.quantity) def testConstructor_007(self): """ Test assignment of quantity attribute, invalid value (negative number). 
""" quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-3") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-6.8") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-0.2") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-.1") self.failUnlessEqual(None, quantity.quantity) def testConstructor_008(self): """ Test assignment of units attribute, None value. """ quantity = ByteQuantity(units=UNIT_BYTES) self.failUnlessEqual(UNIT_BYTES, quantity.units) quantity.units = None self.failUnlessEqual(None, quantity.units) def testConstructor_009(self): """ Test assignment of units attribute, valid values. """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.units) quantity.units = UNIT_BYTES self.failUnlessEqual(UNIT_BYTES, quantity.units) quantity.units = UNIT_KBYTES self.failUnlessEqual(UNIT_KBYTES, quantity.units) quantity.units = UNIT_MBYTES self.failUnlessEqual(UNIT_MBYTES, quantity.units) quantity.units = UNIT_GBYTES self.failUnlessEqual(UNIT_GBYTES, quantity.units) def testConstructor_010(self): """ Test assignment of units attribute, invalid value (empty). """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "") self.failUnlessEqual(None, quantity.units) def testConstructor_011(self): """ Test assignment of units attribute, invalid value (not a valid unit). 
""" quantity = ByteQuantity() self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", 16) self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", -2) self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "bytes") self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "B") self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "KB") self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "MB") self.failUnlessEqual(None, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "GB") self.failUnlessEqual(None, quantity.units) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ quantity1 = ByteQuantity() quantity2 = ByteQuantity() self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ quantity1 = ByteQuantity("12", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_BYTES) self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_003(self): """ Test comparison of two differing objects, quantity differs (one None). 
""" quantity1 = ByteQuantity() quantity2 = ByteQuantity(quantity="12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004(self): """ Test comparison of two differing objects, quantity differs. """ quantity1 = ByteQuantity("10", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_BYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_005(self): """ Test comparison of two differing objects, units differs (one None). """ quantity1 = ByteQuantity() quantity2 = ByteQuantity(units=UNIT_MBYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_006(self): """ Test comparison of two differing objects, units differs. 
""" quantity1 = ByteQuantity("12", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_KBYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) ############################### # TestActionDependencies class ############################### class TestActionDependencies(unittest.TestCase): """Tests for the ActionDependencies class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ActionDependencies() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessEqual(None, dependencies.afterList) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ dependencies = ActionDependencies(["b", ], ["a", ]) self.failUnlessEqual(["b", ], dependencies.beforeList) self.failUnlessEqual(["a", ], dependencies.afterList) def testConstructor_003(self): """ Test assignment of beforeList attribute, None value. 
""" dependencies = ActionDependencies(beforeList=[]) self.failUnlessEqual([], dependencies.beforeList) dependencies.beforeList = None self.failUnlessEqual(None, dependencies.beforeList) def testConstructor_004(self): """ Test assignment of beforeList attribute, empty list. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) dependencies.beforeList = [] self.failUnlessEqual([], dependencies.beforeList) def testConstructor_005(self): """ Test assignment of beforeList attribute, non-empty list, valid values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) dependencies.beforeList = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], dependencies.beforeList) def testConstructor_006(self): """ Test assignment of beforeList attribute, non-empty list, invalid value. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["KEN", ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["hello, world" ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["dash-word", ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["", ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", [None, ]) self.failUnlessEqual(None, dependencies.beforeList) def testConstructor_007(self): """ Test assignment of beforeList attribute, non-empty list, mixed values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["ken", "dash-word", ]) def testConstructor_008(self): """ Test assignment of afterList attribute, None value. 
""" dependencies = ActionDependencies(afterList=[]) self.failUnlessEqual([], dependencies.afterList) dependencies.afterList = None self.failUnlessEqual(None, dependencies.afterList) def testConstructor_009(self): """ Test assignment of afterList attribute, non-empty list, valid values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.afterList) dependencies.afterList = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], dependencies.afterList) def testConstructor_010(self): """ Test assignment of afterList attribute, non-empty list, invalid values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.afterList) def testConstructor_011(self): """ Test assignment of afterList attribute, non-empty list, mixed values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["KEN", ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["hello, world" ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["dash-word", ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["", ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", [None, ]) self.failUnlessEqual(None, dependencies.afterList) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" dependencies1 = ActionDependencies() dependencies2 = ActionDependencies() self.failUnlessEqual(dependencies1, dependencies2) self.failUnless(dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(not dependencies1 != dependencies2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.failUnlessEqual(dependencies1, dependencies2) self.failUnless(dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(not dependencies1 != dependencies2) def testComparison_003(self): """ Test comparison of two differing objects, beforeList differs (one None). """ dependencies1 = ActionDependencies(beforeList=None, afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.failUnless(not dependencies1 == dependencies2) self.failUnless(dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(not dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_004(self): """ Test comparison of two differing objects, beforeList differs (one empty). 
""" dependencies1 = ActionDependencies(beforeList=[], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.failUnless(not dependencies1 == dependencies2) self.failUnless(dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(not dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_005(self): """ Test comparison of two differing objects, beforeList differs. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["b", ], afterList=["b", ]) self.failUnless(not dependencies1 == dependencies2) self.failUnless(dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(not dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_006(self): """ Test comparison of two differing objects, afterList differs (one None). """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=None) self.failIfEqual(dependencies1, dependencies2) self.failUnless(not dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(not dependencies1 <= dependencies2) self.failUnless(dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_007(self): """ Test comparison of two differing objects, afterList differs (one empty). 
""" dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=[]) self.failIfEqual(dependencies1, dependencies2) self.failUnless(not dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(not dependencies1 <= dependencies2) self.failUnless(dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_008(self): """ Test comparison of two differing objects, afterList differs. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["a", ]) self.failIfEqual(dependencies1, dependencies2) self.failUnless(not dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(not dependencies1 <= dependencies2) self.failUnless(dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) ####################### # TestActionHook class ####################### class TestActionHook(unittest.TestCase): """Tests for the ActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" hook = ActionHook() self.failUnlessEqual(False, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual(None, hook.action) self.failUnlessEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = ActionHook(action="action", command="command") self.failUnlessEqual(False, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual("action", hook.action) self.failUnlessEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. """ hook = ActionHook(action="action") self.failUnlessEqual("action", hook.action) hook.action = None self.failUnlessEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = ActionHook() self.failUnlessEqual(None, hook.action) hook.action = "action" self.failUnlessEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = ActionHook() self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.failUnlessEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. """ hook = ActionHook(command="command") self.failUnlessEqual("command", hook.command) hook.command = None self.failUnlessEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. 
""" hook = ActionHook() self.failUnlessEqual(None, hook.command) hook.command = "command" self.failUnlessEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. """ hook = ActionHook() self.failUnlessEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.failUnlessEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ hook1 = ActionHook() hook2 = ActionHook() self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = ActionHook(action="action", command="command") hook2 = ActionHook(action="action", command="command") self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). """ hook1 = ActionHook(action="action", command="command") hook2 = ActionHook(action=None, command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. 
""" hook1 = ActionHook(action="action2", command="command") hook2 = ActionHook(action="action1", command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). """ hook1 = ActionHook(action="action", command=None) hook2 = ActionHook(action="action", command="command") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. """ hook1 = ActionHook(action="action", command="command1") hook2 = ActionHook(action="action", command="command2") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) ########################## # TestPreActionHook class ########################## class TestPreActionHook(unittest.TestCase): """Tests for the PreActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = PreActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ hook = PreActionHook() self.failUnlessEqual(True, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual(None, hook.action) self.failUnlessEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = PreActionHook(action="action", command="command") self.failUnlessEqual(True, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual("action", hook.action) self.failUnlessEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. """ hook = PreActionHook(action="action") self.failUnlessEqual("action", hook.action) hook.action = None self.failUnlessEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = PreActionHook() self.failUnlessEqual(None, hook.action) hook.action = "action" self.failUnlessEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = PreActionHook() self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.failUnlessEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. 
""" hook = PreActionHook(command="command") self.failUnlessEqual("command", hook.command) hook.command = None self.failUnlessEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. """ hook = PreActionHook() self.failUnlessEqual(None, hook.command) hook.command = "command" self.failUnlessEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. """ hook = PreActionHook() self.failUnlessEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.failUnlessEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ hook1 = PreActionHook() hook2 = PreActionHook() self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = PreActionHook(action="action", command="command") hook2 = PreActionHook(action="action", command="command") self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). 
""" hook1 = PreActionHook(action="action", command="command") hook2 = PreActionHook(action=None, command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. """ hook1 = PreActionHook(action="action2", command="command") hook2 = PreActionHook(action="action1", command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). """ hook1 = PreActionHook(action="action", command=None) hook2 = PreActionHook(action="action", command="command") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. 
""" hook1 = PreActionHook(action="action", command="command1") hook2 = PreActionHook(action="action", command="command2") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) ########################### # TestPostActionHook class ########################### class TestPostActionHook(unittest.TestCase): """Tests for the PostActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PostActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ hook = PostActionHook() self.failUnlessEqual(False, hook._before) self.failUnlessEqual(True, hook._after) self.failUnlessEqual(None, hook.action) self.failUnlessEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = PostActionHook(action="action", command="command") self.failUnlessEqual(False, hook._before) self.failUnlessEqual(True, hook._after) self.failUnlessEqual("action", hook.action) self.failUnlessEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. 
""" hook = PostActionHook(action="action") self.failUnlessEqual("action", hook.action) hook.action = None self.failUnlessEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = PostActionHook() self.failUnlessEqual(None, hook.action) hook.action = "action" self.failUnlessEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = PostActionHook() self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.failUnlessEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. """ hook = PostActionHook(command="command") self.failUnlessEqual("command", hook.command) hook.command = None self.failUnlessEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. """ hook = PostActionHook() self.failUnlessEqual(None, hook.command) hook.command = "command" self.failUnlessEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. """ hook = PostActionHook() self.failUnlessEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.failUnlessEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" hook1 = PostActionHook() hook2 = PostActionHook() self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = PostActionHook(action="action", command="command") hook2 = PostActionHook(action="action", command="command") self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). """ hook1 = PostActionHook(action="action", command="command") hook2 = PostActionHook(action=None, command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. """ hook1 = PostActionHook(action="action2", command="command") hook2 = PostActionHook(action="action1", command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). 
""" hook1 = PostActionHook(action="action", command=None) hook2 = PostActionHook(action="action", command="command") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. """ hook1 = PostActionHook(action="action", command="command1") hook2 = PostActionHook(action="action", command="command2") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) ########################## # TestBlankBehavior class ########################## class TestBlankBehavior(unittest.TestCase): """Tests for the BlankBehavior class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = BlankBehavior() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankMode) self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" behavior = BlankBehavior(blankMode="daily", blankFactor="1.0") self.failUnlessEqual("daily", behavior.blankMode) self.failUnlessEqual("1.0", behavior.blankFactor) def testConstructor_003(self): """ Test assignment of blankMode, None value. """ behavior = BlankBehavior(blankMode="daily") self.failUnlessEqual("daily", behavior.blankMode) behavior.blankMode = None self.failUnlessEqual(None, behavior.blankMode) def testConstructor_004(self): """ Test assignment of blankMode attribute, valid value. """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankMode) behavior.blankMode = "daily" self.failUnlessEqual("daily", behavior.blankMode) behavior.blankMode = "weekly" self.failUnlessEqual("weekly", behavior.blankMode) def testConstructor_005(self): """ Test assignment of blankFactor attribute, None value. """ behavior = BlankBehavior(blankFactor="1.3") self.failUnlessEqual("1.3", behavior.blankFactor) behavior.blankFactor = None self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_006(self): """ Test assignment of blankFactor attribute, valid values. """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) behavior.blankFactor = "1.0" self.failUnlessEqual("1.0", behavior.blankFactor) behavior.blankFactor = ".1" self.failUnlessEqual(".1", behavior.blankFactor) behavior.blankFactor = "12" self.failUnlessEqual("12", behavior.blankFactor) behavior.blankFactor = "0.5" self.failUnlessEqual("0.5", behavior.blankFactor) behavior.blankFactor = "181281" self.failUnlessEqual("181281", behavior.blankFactor) behavior.blankFactor = "1E6" self.failUnlessEqual("1E6", behavior.blankFactor) behavior.blankFactor = "0.25E2" self.failUnlessEqual("0.25E2", behavior.blankFactor) if hexFloatLiteralAllowed(): # Some interpreters allow this, some don't behavior.blankFactor = "0xAC" self.failUnlessEqual("0xAC", behavior.blankFactor) def testConstructor_007(self): """ Test assignment of blankFactor attribute, invalid value (empty). 
""" behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "") self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_008(self): """ Test assignment of blankFactor attribute, invalid value (not a floating point number). """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "blech") self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_009(self): """ Test assignment of blankFactor store attribute, invalid value (negative number). """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-3") self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-6.8") self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-0.2") self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-.1") self.failUnlessEqual(None, behavior.blankFactor) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ behavior1 = BlankBehavior() behavior2 = BlankBehavior() self.failUnlessEqual(behavior1, behavior2) self.failUnless(behavior1 == behavior2) self.failUnless(not behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(behavior1 >= behavior2) self.failUnless(not behavior1 != behavior2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" behavior1 = BlankBehavior(blankMode="weekly", blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnlessEqual(behavior1, behavior2) self.failUnless(behavior1 == behavior2) self.failUnless(not behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(behavior1 >= behavior2) self.failUnless(not behavior1 != behavior2) def testComparison_003(self): """ Test comparison of two different objects, blankMode differs (one None). """ behavior1 = BlankBehavior(None, blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) def testComparison_004(self): """ Test comparison of two different objects, blankMode differs. """ behavior1 = BlankBehavior(blankMode="daily", blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) def testComparison_005(self): """ Test comparison of two different objects, blankFactor differs (one None). """ behavior1 = BlankBehavior(blankMode="weekly", blankFactor=None) behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) def testComparison_006(self): """ Test comparison of two different objects, blankFactor differs. 
""" behavior1 = BlankBehavior(blankMode="weekly", blankFactor="0.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) ########################### # TestExtendedAction class ########################### class TestExtendedAction(unittest.TestCase): """Tests for the ExtendedAction class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ExtendedAction() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ action = ExtendedAction() self.failUnlessEqual(None, action.name) self.failUnlessEqual(None, action.module) self.failUnlessEqual(None, action.function) self.failUnlessEqual(None, action.index) self.failUnlessEqual(None, action.dependencies) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ action = ExtendedAction("one", "two", "three", 4, ActionDependencies()) self.failUnlessEqual("one", action.name) self.failUnlessEqual("two", action.module) self.failUnlessEqual("three", action.function) self.failUnlessEqual(4, action.index) self.failUnlessEqual(ActionDependencies(), action.dependencies) def testConstructor_003(self): """ Test assignment of name attribute, None value. 
""" action = ExtendedAction(name="name") self.failUnlessEqual("name", action.name) action.name = None self.failUnlessEqual(None, action.name) def testConstructor_004(self): """ Test assignment of name attribute, valid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.name) action.name = "name" self.failUnlessEqual("name", action.name) action.name = "9" self.failUnlessEqual("9", action.name) action.name = "name99name" self.failUnlessEqual("name99name", action.name) action.name = "12action" self.failUnlessEqual("12action", action.name) def testConstructor_005(self): """ Test assignment of name attribute, invalid value (empty). """ action = ExtendedAction() self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "") self.failUnlessEqual(None, action.name) def testConstructor_006(self): """ Test assignment of name attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "Something") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "what_ever") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "_BOGUS") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "stuff-here") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "/more/stuff") self.failUnlessEqual(None, action.name) def testConstructor_007(self): """ Test assignment of module attribute, None value. """ action = ExtendedAction(module="module") self.failUnlessEqual("module", action.module) action.module = None self.failUnlessEqual(None, action.module) def testConstructor_008(self): """ Test assignment of module attribute, valid value. 
""" action = ExtendedAction() self.failUnlessEqual(None, action.module) action.module = "module" self.failUnlessEqual("module", action.module) action.module = "stuff" self.failUnlessEqual("stuff", action.module) action.module = "stuff.something" self.failUnlessEqual("stuff.something", action.module) action.module = "_identifier.__another.one_more__" self.failUnlessEqual("_identifier.__another.one_more__", action.module) def testConstructor_009(self): """ Test assignment of module attribute, invalid value (empty). """ action = ExtendedAction() self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "") self.failUnlessEqual(None, action.module) def testConstructor_010(self): """ Test assignment of module attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "9something") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "_bogus.") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "-bogus") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "/BOGUS") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "really._really__.___really.long.bad.path.") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", ".really._really__.___really.long.bad.path") self.failUnlessEqual(None, action.module) def testConstructor_011(self): """ Test assignment of function attribute, None value. """ action = ExtendedAction(function="function") self.failUnlessEqual("function", action.function) action.function = None self.failUnlessEqual(None, action.function) def testConstructor_012(self): """ Test assignment of function attribute, valid value. 
""" action = ExtendedAction() self.failUnlessEqual(None, action.function) action.function = "function" self.failUnlessEqual("function", action.function) action.function = "_stuff" self.failUnlessEqual("_stuff", action.function) action.function = "moreStuff9" self.failUnlessEqual("moreStuff9", action.function) action.function = "__identifier__" self.failUnlessEqual("__identifier__", action.function) def testConstructor_013(self): """ Test assignment of function attribute, invalid value (empty). """ action = ExtendedAction() self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "") self.failUnlessEqual(None, action.function) def testConstructor_014(self): """ Test assignment of function attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "9something") self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "one.two") self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "-bogus") self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "/BOGUS") self.failUnlessEqual(None, action.function) def testConstructor_015(self): """ Test assignment of index attribute, None value. """ action = ExtendedAction(index=1) self.failUnlessEqual(1, action.index) action.index = None self.failUnlessEqual(None, action.index) def testConstructor_016(self): """ Test assignment of index attribute, valid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.index) action.index = 1 self.failUnlessEqual(1, action.index) def testConstructor_017(self): """ Test assignment of index attribute, invalid value. 
""" action = ExtendedAction() self.failUnlessEqual(None, action.index) self.failUnlessAssignRaises(ValueError, action, "index", "ken") self.failUnlessEqual(None, action.index) def testConstructor_018(self): """ Test assignment of dependencies attribute, None value. """ action = ExtendedAction(dependencies=ActionDependencies()) self.failUnlessEqual(ActionDependencies(), action.dependencies) action.dependencies = None self.failUnlessEqual(None, action.dependencies) def testConstructor_019(self): """ Test assignment of dependencies attribute, valid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.dependencies) action.dependencies = ActionDependencies() self.failUnlessEqual(ActionDependencies(), action.dependencies) def testConstructor_020(self): """ Test assignment of dependencies attribute, invalid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.dependencies) self.failUnlessAssignRaises(ValueError, action, "dependencies", "ken") self.failUnlessEqual(None, action.dependencies) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ action1 = ExtendedAction() action2 = ExtendedAction() self.failUnlessEqual(action1, action2) self.failUnless(action1 == action2) self.failUnless(not action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(action1 >= action2) self.failUnless(not action1 != action2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" action1 = ExtendedAction("one", "two", "three", 4, ActionDependencies()) action2 = ExtendedAction("one", "two", "three", 4, ActionDependencies()) self.failUnless(action1 == action2) self.failUnless(not action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(action1 >= action2) self.failUnless(not action1 != action2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ action1 = ExtendedAction(name="name") action2 = ExtendedAction() self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. """ action1 = ExtendedAction("name2", "two", "three", 4) action2 = ExtendedAction("name1", "two", "three", 4) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_005(self): """ Test comparison of two differing objects, module differs (one None). """ action1 = ExtendedAction(module="whatever") action2 = ExtendedAction() self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_006(self): """ Test comparison of two differing objects, module differs. 
""" action1 = ExtendedAction("one", "MODULE", "three", 4) action2 = ExtendedAction("one", "two", "three", 4) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_007(self): """ Test comparison of two differing objects, function differs (one None). """ action1 = ExtendedAction(function="func1") action2 = ExtendedAction() self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_008(self): """ Test comparison of two differing objects, function differs. """ action1 = ExtendedAction("one", "two", "func1", 4) action2 = ExtendedAction("one", "two", "func2", 4) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_009(self): """ Test comparison of two differing objects, index differs (one None). """ action1 = ExtendedAction() action2 = ExtendedAction(index=42) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_010(self): """ Test comparison of two differing objects, index differs. 
""" action1 = ExtendedAction("one", "two", "three", 99) action2 = ExtendedAction("one", "two", "three", 12) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_011(self): """ Test comparison of two differing objects, dependencies differs (one None). """ action1 = ExtendedAction() action2 = ExtendedAction(dependencies=ActionDependencies()) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_012(self): """ Test comparison of two differing objects, dependencies differs. """ action1 = ExtendedAction("one", "two", "three", 99, ActionDependencies(beforeList=[])) action2 = ExtendedAction("one", "two", "three", 99, ActionDependencies(beforeList=["ken", ])) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) ############################ # TestCommandOverride class ############################ class TestCommandOverride(unittest.TestCase): """Tests for the CommandOverride class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. 
bad variable names). """ obj = CommandOverride() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ override = CommandOverride() self.failUnlessEqual(None, override.command) self.failUnlessEqual(None, override.absolutePath) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ override = CommandOverride(command="command", absolutePath="/path/to/something") self.failUnlessEqual("command", override.command) self.failUnlessEqual("/path/to/something", override.absolutePath) def testConstructor_003(self): """ Test assignment of command attribute, None value. """ override = CommandOverride(command="command") self.failUnlessEqual("command", override.command) override.command = None self.failUnlessEqual(None, override.command) def testConstructor_004(self): """ Test assignment of command attribute, valid value. """ override = CommandOverride() self.failUnlessEqual(None, override.command) override.command = "command" self.failUnlessEqual("command", override.command) def testConstructor_005(self): """ Test assignment of command attribute, invalid value. """ override = CommandOverride() override.command = None self.failUnlessAssignRaises(ValueError, override, "command", "") override.command = None def testConstructor_006(self): """ Test assignment of absolutePath attribute, None value. """ override = CommandOverride(absolutePath="/path/to/something") self.failUnlessEqual("/path/to/something", override.absolutePath) override.absolutePath = None self.failUnlessEqual(None, override.absolutePath) def testConstructor_007(self): """ Test assignment of absolutePath attribute, valid value. 
""" override = CommandOverride() self.failUnlessEqual(None, override.absolutePath) override.absolutePath = "/path/to/something" self.failUnlessEqual("/path/to/something", override.absolutePath) def testConstructor_008(self): """ Test assignment of absolutePath attribute, invalid value. """ override = CommandOverride() override.command = None self.failUnlessAssignRaises(ValueError, override, "absolutePath", "path/to/something/relative") override.command = None self.failUnlessAssignRaises(ValueError, override, "absolutePath", "") override.command = None ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ override1 = CommandOverride() override2 = CommandOverride() self.failUnlessEqual(override1, override2) self.failUnless(override1 == override2) self.failUnless(not override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(override1 >= override2) self.failUnless(not override1 != override2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ override1 = CommandOverride(command="command", absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath="/path/to/something") self.failUnlessEqual(override1, override2) self.failUnless(override1 == override2) self.failUnless(not override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(override1 >= override2) self.failUnless(not override1 != override2) def testComparison_003(self): """ Test comparison of differing objects, command differs (one None). 
""" override1 = CommandOverride(command=None, absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath="/path/to/something") self.failUnless(not override1 == override2) self.failUnless(override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(not override1 >= override2) self.failUnless(override1 != override2) def testComparison_004(self): """ Test comparison of differing objects, command differs. """ override1 = CommandOverride(command="command2", absolutePath="/path/to/something") override2 = CommandOverride(command="command1", absolutePath="/path/to/something") self.failUnless(not override1 == override2) self.failUnless(not override1 < override2) self.failUnless(not override1 <= override2) self.failUnless(override1 > override2) self.failUnless(override1 >= override2) self.failUnless(override1 != override2) def testComparison_005(self): """ Test comparison of differing objects, absolutePath differs (one None). """ override1 = CommandOverride(command="command", absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath=None) self.failUnless(not override1 == override2) self.failUnless(not override1 < override2) self.failUnless(not override1 <= override2) self.failUnless(override1 > override2) self.failUnless(override1 >= override2) self.failUnless(override1 != override2) def testComparison_006(self): """ Test comparison of differing objects, absolutePath differs. 
""" override1 = CommandOverride(command="command", absolutePath="/path/to/something1") override2 = CommandOverride(command="command", absolutePath="/path/to/something2") self.failUnless(not override1 == override2) self.failUnless(override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(not override1 >= override2) self.failUnless(override1 != override2) ######################## # TestCollectFile class ######################## class TestCollectFile(unittest.TestCase): """Tests for the CollectFile class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectFile() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) self.failUnlessEqual(None, collectFile.collectMode) self.failUnlessEqual(None, collectFile.archiveMode) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ collectFile = CollectFile("/etc/whatever", "incr", "tar") self.failUnlessEqual("/etc/whatever", collectFile.absolutePath) self.failUnlessEqual("incr", collectFile.collectMode) self.failUnlessEqual("tar", collectFile.archiveMode) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. 
""" collectFile = CollectFile(absolutePath="/whatever") self.failUnlessEqual("/whatever", collectFile.absolutePath) collectFile.absolutePath = None self.failUnlessEqual(None, collectFile.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) collectFile.absolutePath = "/etc/whatever" self.failUnlessEqual("/etc/whatever", collectFile.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) self.failUnlessAssignRaises(ValueError, collectFile, "absolutePath", "") self.failUnlessEqual(None, collectFile.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) self.failUnlessAssignRaises(ValueError, collectFile, "absolutePath", "whatever") self.failUnlessEqual(None, collectFile.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ collectFile = CollectFile(collectMode="incr") self.failUnlessEqual("incr", collectFile.collectMode) collectFile.collectMode = None self.failUnlessEqual(None, collectFile.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.collectMode) collectFile.collectMode = "daily" self.failUnlessEqual("daily", collectFile.collectMode) collectFile.collectMode = "weekly" self.failUnlessEqual("weekly", collectFile.collectMode) collectFile.collectMode = "incr" self.failUnlessEqual("incr", collectFile.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). 
""" collectFile = CollectFile() self.failUnlessEqual(None, collectFile.collectMode) self.failUnlessAssignRaises(ValueError, collectFile, "collectMode", "") self.failUnlessEqual(None, collectFile.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.collectMode) self.failUnlessAssignRaises(ValueError, collectFile, "collectMode", "bogus") self.failUnlessEqual(None, collectFile.collectMode) def testConstructor_011(self): """ Test assignment of archiveMode attribute, None value. """ collectFile = CollectFile(archiveMode="tar") self.failUnlessEqual("tar", collectFile.archiveMode) collectFile.archiveMode = None self.failUnlessEqual(None, collectFile.archiveMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, valid value. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.archiveMode) collectFile.archiveMode = "tar" self.failUnlessEqual("tar", collectFile.archiveMode) collectFile.archiveMode = "targz" self.failUnlessEqual("targz", collectFile.archiveMode) collectFile.archiveMode = "tarbz2" self.failUnlessEqual("tarbz2", collectFile.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.archiveMode) self.failUnlessAssignRaises(ValueError, collectFile, "archiveMode", "") self.failUnlessEqual(None, collectFile.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (not in list). 
""" collectFile = CollectFile() self.failUnlessEqual(None, collectFile.archiveMode) self.failUnlessAssignRaises(ValueError, collectFile, "archiveMode", "bogus") self.failUnlessEqual(None, collectFile.archiveMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ collectFile1 = CollectFile() collectFile2 = CollectFile() self.failUnlessEqual(collectFile1, collectFile2) self.failUnless(collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(not collectFile1 != collectFile2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/etc/whatever", "incr", "tar") self.failUnless(collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(not collectFile1 != collectFile2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ collectFile1 = CollectFile() collectFile2 = CollectFile(absolutePath="/whatever") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. 
""" collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/stuff", "incr", "tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ collectFile1 = CollectFile() collectFile2 = CollectFile(collectMode="incr") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/etc/whatever", "daily", "tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(not collectFile1 <= collectFile2) self.failUnless(collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_007(self): """ Test comparison of two differing objects, archiveMode differs (one None). 
""" collectFile1 = CollectFile() collectFile2 = CollectFile(archiveMode="tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs. """ collectFile1 = CollectFile("/etc/whatever", "incr", "targz") collectFile2 = CollectFile("/etc/whatever", "incr", "tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(not collectFile1 <= collectFile2) self.failUnless(collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) ####################### # TestCollectDir class ####################### class TestCollectDir(unittest.TestCase): """Tests for the CollectDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) self.failUnlessEqual(None, collectDir.collectMode) self.failUnlessEqual(None, collectDir.archiveMode) self.failUnlessEqual(None, collectDir.ignoreFile) self.failUnlessEqual(None, collectDir.linkDepth) self.failUnlessEqual(False, collectDir.dereference) self.failUnlessEqual(None, collectDir.recursionLevel) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessEqual(None, collectDir.relativeExcludePaths) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ collectDir = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 2, True, 6) self.failUnlessEqual("/etc/whatever", collectDir.absolutePath) self.failUnlessEqual("incr", collectDir.collectMode) self.failUnlessEqual("tar", collectDir.archiveMode) self.failUnlessEqual(".ignore", collectDir.ignoreFile) self.failUnlessEqual(2, collectDir.linkDepth) self.failUnlessEqual(True, collectDir.dereference) self.failUnlessEqual(6, collectDir.recursionLevel) self.failUnlessEqual([], collectDir.absoluteExcludePaths) self.failUnlessEqual([], collectDir.relativeExcludePaths) self.failUnlessEqual([], collectDir.excludePatterns) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ collectDir = CollectDir(absolutePath="/whatever") self.failUnlessEqual("/whatever", collectDir.absolutePath) collectDir.absolutePath = None self.failUnlessEqual(None, collectDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) collectDir.absolutePath = "/etc/whatever" self.failUnlessEqual("/etc/whatever", collectDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) self.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", "") self.failUnlessEqual(None, collectDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) self.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", "whatever") self.failUnlessEqual(None, collectDir.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ collectDir = CollectDir(collectMode="incr") self.failUnlessEqual("incr", collectDir.collectMode) collectDir.collectMode = None self.failUnlessEqual(None, collectDir.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.collectMode) collectDir.collectMode = "daily" self.failUnlessEqual("daily", collectDir.collectMode) collectDir.collectMode = "weekly" self.failUnlessEqual("weekly", collectDir.collectMode) collectDir.collectMode = "incr" self.failUnlessEqual("incr", collectDir.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.collectMode) self.failUnlessAssignRaises(ValueError, collectDir, "collectMode", "") self.failUnlessEqual(None, collectDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.collectMode) self.failUnlessAssignRaises(ValueError, collectDir, "collectMode", "bogus") self.failUnlessEqual(None, collectDir.collectMode) def testConstructor_011(self): """ Test assignment of archiveMode attribute, None value. 
""" collectDir = CollectDir(archiveMode="tar") self.failUnlessEqual("tar", collectDir.archiveMode) collectDir.archiveMode = None self.failUnlessEqual(None, collectDir.archiveMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.archiveMode) collectDir.archiveMode = "tar" self.failUnlessEqual("tar", collectDir.archiveMode) collectDir.archiveMode = "targz" self.failUnlessEqual("targz", collectDir.archiveMode) collectDir.archiveMode = "tarbz2" self.failUnlessEqual("tarbz2", collectDir.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.archiveMode) self.failUnlessAssignRaises(ValueError, collectDir, "archiveMode", "") self.failUnlessEqual(None, collectDir.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (not in list). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.archiveMode) self.failUnlessAssignRaises(ValueError, collectDir, "archiveMode", "bogus") self.failUnlessEqual(None, collectDir.archiveMode) def testConstructor_015(self): """ Test assignment of ignoreFile attribute, None value. """ collectDir = CollectDir(ignoreFile="ignore") self.failUnlessEqual("ignore", collectDir.ignoreFile) collectDir.ignoreFile = None self.failUnlessEqual(None, collectDir.ignoreFile) def testConstructor_016(self): """ Test assignment of ignoreFile attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.ignoreFile) collectDir.ignoreFile = "ignorefile" self.failUnlessEqual("ignorefile", collectDir.ignoreFile) def testConstructor_017(self): """ Test assignment of ignoreFile attribute, invalid value (empty). 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.ignoreFile) self.failUnlessAssignRaises(ValueError, collectDir, "ignoreFile", "") self.failUnlessEqual(None, collectDir.ignoreFile) def testConstructor_018(self): """ Test assignment of absoluteExcludePaths attribute, None value. """ collectDir = CollectDir(absoluteExcludePaths=[]) self.failUnlessEqual([], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = None self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_019(self): """ Test assignment of absoluteExcludePaths attribute, [] value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = [] self.failUnlessEqual([], collectDir.absoluteExcludePaths) def testConstructor_020(self): """ Test assignment of absoluteExcludePaths attribute, single valid entry. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = ["/whatever", ] self.failUnlessEqual(["/whatever", ], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths.append("/stuff") self.failUnlessEqual(["/whatever", "/stuff", ], collectDir.absoluteExcludePaths) def testConstructor_021(self): """ Test assignment of absoluteExcludePaths attribute, multiple valid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = ["/whatever", "/stuff", ] self.failUnlessEqual(["/whatever", "/stuff", ], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths.append("/etc/X11") self.failUnlessEqual(["/whatever", "/stuff", "/etc/X11", ], collectDir.absoluteExcludePaths) def testConstructor_022(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (empty). 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["", ]) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_023(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (not absolute). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["notabsolute", ]) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_024(self): """ Test assignment of absoluteExcludePaths attribute, mixed valid and invalid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["/good", "bad", "/alsogood", ]) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_025(self): """ Test assignment of relativeExcludePaths attribute, None value. """ collectDir = CollectDir(relativeExcludePaths=[]) self.failUnlessEqual([], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = None self.failUnlessEqual(None, collectDir.relativeExcludePaths) def testConstructor_026(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = [] self.failUnlessEqual([], collectDir.relativeExcludePaths) def testConstructor_027(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = ["stuff", ] self.failUnlessEqual(["stuff", ], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths.insert(0, "bogus") self.failUnlessEqual(["bogus", "stuff", ], collectDir.relativeExcludePaths) def testConstructor_028(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = ["bogus", "stuff", ] self.failUnlessEqual(["bogus", "stuff", ], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths.append("more") self.failUnlessEqual(["bogus", "stuff", "more", ], collectDir.relativeExcludePaths) def testConstructor_029(self): """ Test assignment of excludePatterns attribute, None value. """ collectDir = CollectDir(excludePatterns=[]) self.failUnlessEqual([], collectDir.excludePatterns) collectDir.excludePatterns = None self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_030(self): """ Test assignment of excludePatterns attribute, [] value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = [] self.failUnlessEqual([], collectDir.excludePatterns) def testConstructor_031(self): """ Test assignment of excludePatterns attribute, single valid entry. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = ["valid", ] self.failUnlessEqual(["valid", ], collectDir.excludePatterns) collectDir.excludePatterns.append("more") self.failUnlessEqual(["valid", "more", ], collectDir.excludePatterns) def testConstructor_032(self): """ Test assignment of excludePatterns attribute, multiple valid entries. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = ["valid", "more", ] self.failUnlessEqual(["valid", "more", ], collectDir.excludePatterns) collectDir.excludePatterns.insert(1, "bogus") self.failUnlessEqual(["valid", "bogus", "more", ], collectDir.excludePatterns) def testConstructor_033(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_034(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", "*", ]) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_035(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", "valid", ]) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_036(self): """ Test assignment of linkDepth attribute, None value. """ collectDir = CollectDir(linkDepth=1) self.failUnlessEqual(1, collectDir.linkDepth) collectDir.linkDepth = None self.failUnlessEqual(None, collectDir.linkDepth) def testConstructor_037(self): """ Test assignment of linkDepth attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.linkDepth) collectDir.linkDepth = 1 self.failUnlessEqual(1, collectDir.linkDepth) def testConstructor_038(self): """ Test assignment of linkDepth attribute, invalid value. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.linkDepth) self.failUnlessAssignRaises(ValueError, collectDir, "linkDepth", "ken") self.failUnlessEqual(None, collectDir.linkDepth) def testConstructor_039(self): """ Test assignment of dereference attribute, None value. """ collectDir = CollectDir(dereference=True) self.failUnlessEqual(True, collectDir.dereference) collectDir.dereference = None self.failUnlessEqual(False, collectDir.dereference) def testConstructor_040(self): """ Test assignment of dereference attribute, valid value (real boolean). """ collectDir = CollectDir() self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = True self.failUnlessEqual(True, collectDir.dereference) collectDir.dereference = False self.failUnlessEqual(False, collectDir.dereference) def testConstructor_041(self): """ Test assignment of dereference attribute, valid value (expression). """ collectDir = CollectDir() self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = 0 self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = [] self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = None self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = ['a'] self.failUnlessEqual(True, collectDir.dereference) collectDir.dereference = 3 self.failUnlessEqual(True, collectDir.dereference) def testConstructor_042(self): """ Test assignment of recursionLevel attribute, None value. """ collectDir = CollectDir(recursionLevel=1) self.failUnlessEqual(1, collectDir.recursionLevel) collectDir.recursionLevel = None self.failUnlessEqual(None, collectDir.recursionLevel) def testConstructor_043(self): """ Test assignment of recursionLevel attribute, valid value. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.recursionLevel) collectDir.recursionLevel = 1 self.failUnlessEqual(1, collectDir.recursionLevel) def testConstructor_044(self): """ Test assignment of recursionLevel attribute, invalid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.recursionLevel) self.failUnlessAssignRaises(ValueError, collectDir, "recursionLevel", "ken") self.failUnlessEqual(None, collectDir.recursionLevel) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ collectDir1 = CollectDir() collectDir2 = CollectDir() self.failUnlessEqual(collectDir1, collectDir2) self.failUnless(collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(not collectDir1 != collectDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failUnless(collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(not collectDir1 != collectDir2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/one", ], ["two", ], ["three", ], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/one", ], ["two", ], ["three", ], 1, True, 6) self.failUnless(collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(not collectDir1 != collectDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(absolutePath="/whatever") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_005(self): """ Test comparison of two differing objects, absolutePath differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/stuff", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(collectMode="incr") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "daily", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(archiveMode="tar") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_009(self): """ Test comparison of two differing objects, archiveMode differs. 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "targz", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_010(self): """ Test comparison of two differing objects, ignoreFile differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(ignoreFile="ignore") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_011(self): """ Test comparison of two differing objects, ignoreFile differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_012(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one empty). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(absoluteExcludePaths=[]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_013(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one not empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(absoluteExcludePaths=["/whatever", ]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_014(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one empty, one not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/whatever", ], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_015(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (both not empty). 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/stuff", ], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/stuff", "/something", ], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) # note: different than standard due to unsorted list self.failUnless(not collectDir1 <= collectDir2) # note: different than standard due to unsorted list self.failUnless(collectDir1 > collectDir2) # note: different than standard due to unsorted list self.failUnless(collectDir1 >= collectDir2) # note: different than standard due to unsorted list self.failUnless(collectDir1 != collectDir2) def testComparison_016(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(relativeExcludePaths=[]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_017(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one not empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(relativeExcludePaths=["stuff", "other", ]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_018(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one empty, one not empty). 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["one", ], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_019(self): """ Test comparison of two differing objects, relativeExcludePaths differs (both not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["one", ], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["two", ], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_020(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(excludePatterns=[]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_021(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(excludePatterns=["one", "two", "three", ]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_022(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["pattern", ], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_023(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["p1", ], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["p2", ], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_024(self): """ Test comparison of two differing objects, linkDepth differs (one None). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(linkDepth=1) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_025(self): """ Test comparison of two differing objects, linkDepth differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 2, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_026(self): """ Test comparison of two differing objects, dereference differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(dereference=True) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_027(self): """ Test comparison of two differing objects, dereference differs. 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, False, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_028(self): """ Test comparison of two differing objects, recursionLevel differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(recursionLevel=1) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_029(self): """ Test comparison of two differing objects, recursionLevel differs. 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 5) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) ##################### # TestPurgeDir class ##################### class TestPurgeDir(unittest.TestCase): """Tests for the PurgeDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PurgeDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ purgeDir = PurgeDir("/whatever", 0) self.failUnlessEqual("/whatever", purgeDir.absolutePath) self.failUnlessEqual(0, purgeDir.retainDays) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. 
""" purgeDir = PurgeDir(absolutePath="/whatever") self.failUnlessEqual("/whatever", purgeDir.absolutePath) purgeDir.absolutePath = None self.failUnlessEqual(None, purgeDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) purgeDir.absolutePath = "/etc/whatever" self.failUnlessEqual("/etc/whatever", purgeDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) self.failUnlessAssignRaises(ValueError, purgeDir, "absolutePath", "") self.failUnlessEqual(None, purgeDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) self.failUnlessAssignRaises(ValueError, purgeDir, "absolutePath", "bogus") self.failUnlessEqual(None, purgeDir.absolutePath) def testConstructor_007(self): """ Test assignment of retainDays attribute, None value. """ purgeDir = PurgeDir(retainDays=12) self.failUnlessEqual(12, purgeDir.retainDays) purgeDir.retainDays = None self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_008(self): """ Test assignment of retainDays attribute, valid value (integer). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) purgeDir.retainDays = 12 self.failUnlessEqual(12, purgeDir.retainDays) def testConstructor_009(self): """ Test assignment of retainDays attribute, valid value (string representing integer). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) purgeDir.retainDays = "12" self.failUnlessEqual(12, purgeDir.retainDays) def testConstructor_010(self): """ Test assignment of retainDays attribute, invalid value (empty string). 
""" purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", "") self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_011(self): """ Test assignment of retainDays attribute, invalid value (non-integer, like a list). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", []) self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_012(self): """ Test assignment of retainDays attribute, invalid value (string representing non-integer). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", "blech") self.failUnlessEqual(None, purgeDir.retainDays) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ purgeDir1 = PurgeDir() purgeDir2 = PurgeDir() self.failUnlessEqual(purgeDir1, purgeDir2) self.failUnless(purgeDir1 == purgeDir2) self.failUnless(not purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(purgeDir1 >= purgeDir2) self.failUnless(not purgeDir1 != purgeDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ purgeDir1 = PurgeDir("/etc/whatever", 12) purgeDir2 = PurgeDir("/etc/whatever", 12) self.failUnless(purgeDir1 == purgeDir2) self.failUnless(not purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(purgeDir1 >= purgeDir2) self.failUnless(not purgeDir1 != purgeDir2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). 
""" purgeDir1 = PurgeDir() purgeDir2 = PurgeDir(absolutePath="/whatever") self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(not purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. """ purgeDir1 = PurgeDir("/etc/blech", 12) purgeDir2 = PurgeDir("/etc/whatever", 12) self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(not purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) def testComparison_005(self): """ Test comparison of two differing objects, retainDays differs (one None). """ purgeDir1 = PurgeDir() purgeDir2 = PurgeDir(retainDays=365) self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(not purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) def testComparison_006(self): """ Test comparison of two differing objects, retainDays differs. 
""" purgeDir1 = PurgeDir("/etc/whatever", 365) purgeDir2 = PurgeDir("/etc/whatever", 12) self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(not purgeDir1 < purgeDir2) self.failUnless(not purgeDir1 <= purgeDir2) self.failUnless(purgeDir1 > purgeDir2) self.failUnless(purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) ###################### # TestLocalPeer class ###################### class TestLocalPeer(unittest.TestCase): """Tests for the LocalPeer class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalPeer() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.name) self.failUnlessEqual(None, localPeer.collectDir) self.failUnlessEqual(None, localPeer.ignoreFailureMode) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ localPeer = LocalPeer("myname", "/whatever", "all") self.failUnlessEqual("myname", localPeer.name) self.failUnlessEqual("/whatever", localPeer.collectDir) self.failUnlessEqual("all", localPeer.ignoreFailureMode) def testConstructor_003(self): """ Test assignment of name attribute, None value. 
""" localPeer = LocalPeer(name="myname") self.failUnlessEqual("myname", localPeer.name) localPeer.name = None self.failUnlessEqual(None, localPeer.name) def testConstructor_004(self): """ Test assignment of name attribute, valid value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.name) localPeer.name = "myname" self.failUnlessEqual("myname", localPeer.name) def testConstructor_005(self): """ Test assignment of name attribute, invalid value (empty). """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.name) self.failUnlessAssignRaises(ValueError, localPeer, "name", "") self.failUnlessEqual(None, localPeer.name) def testConstructor_006(self): """ Test assignment of collectDir attribute, None value. """ localPeer = LocalPeer(collectDir="/whatever") self.failUnlessEqual("/whatever", localPeer.collectDir) localPeer.collectDir = None self.failUnlessEqual(None, localPeer.collectDir) def testConstructor_007(self): """ Test assignment of collectDir attribute, valid value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.collectDir) localPeer.collectDir = "/etc/stuff" self.failUnlessEqual("/etc/stuff", localPeer.collectDir) def testConstructor_008(self): """ Test assignment of collectDir attribute, invalid value (empty). """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.collectDir) self.failUnlessAssignRaises(ValueError, localPeer, "collectDir", "") self.failUnlessEqual(None, localPeer.collectDir) def testConstructor_009(self): """ Test assignment of collectDir attribute, invalid value (non-absolute). """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.collectDir) self.failUnlessAssignRaises(ValueError, localPeer, "collectDir", "bogus") self.failUnlessEqual(None, localPeer.collectDir) def testConstructor_010(self): """ Test assignment of ignoreFailureMode attribute, valid values. 
""" localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "none" self.failUnlessEqual("none", localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "all" self.failUnlessEqual("all", localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "daily" self.failUnlessEqual("daily", localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "weekly" self.failUnlessEqual("weekly", localPeer.ignoreFailureMode) def testConstructor_011(self): """ Test assignment of ignoreFailureMode attribute, invalid value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, localPeer, "ignoreFailureMode", "bogus") def testConstructor_012(self): """ Test assignment of ignoreFailureMode attribute, None value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = None self.failUnlessEqual(None, localPeer.ignoreFailureMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ localPeer1 = LocalPeer() localPeer2 = LocalPeer() self.failUnlessEqual(localPeer1, localPeer2) self.failUnless(localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(not localPeer1 != localPeer2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" localPeer1 = LocalPeer("myname", "/etc/stuff", "all") localPeer2 = LocalPeer("myname", "/etc/stuff", "all") self.failUnless(localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(not localPeer1 != localPeer2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ localPeer1 = LocalPeer() localPeer2 = LocalPeer(name="blech") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. """ localPeer1 = LocalPeer("name", "/etc/stuff", "all") localPeer2 = LocalPeer("name", "/etc/whatever", "all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_005(self): """ Test comparison of two differing objects, collectDir differs (one None). """ localPeer1 = LocalPeer() localPeer2 = LocalPeer(collectDir="/etc/whatever") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_006(self): """ Test comparison of two differing objects, collectDir differs. 
""" localPeer1 = LocalPeer("name2", "/etc/stuff", "all") localPeer2 = LocalPeer("name1", "/etc/stuff", "all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(not localPeer1 <= localPeer2) self.failUnless(localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_008(self): """ Test comparison of two differing objects, ignoreFailureMode differs (one None). """ localPeer1 = LocalPeer() localPeer2 = LocalPeer(ignoreFailureMode="all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_009(self): """ Test comparison of two differing objects, collectDir differs. 
""" localPeer1 = LocalPeer("name1", "/etc/stuff", "none") localPeer2 = LocalPeer("name1", "/etc/stuff", "all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(not localPeer1 <= localPeer2) self.failUnless(localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) ####################### # TestRemotePeer class ####################### class TestRemotePeer(unittest.TestCase): """Tests for the RemotePeer class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = RemotePeer() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.name) self.failUnlessEqual(None, remotePeer.collectDir) self.failUnlessEqual(None, remotePeer.remoteUser) self.failUnlessEqual(None, remotePeer.rcpCommand) self.failUnlessEqual(None, remotePeer.rshCommand) self.failUnlessEqual(None, remotePeer.cbackCommand) self.failUnlessEqual(False, remotePeer.managed) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessEqual(None, remotePeer.ignoreFailureMode) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" remotePeer = RemotePeer("myname", "/stuff", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failUnlessEqual("myname", remotePeer.name) self.failUnlessEqual("/stuff", remotePeer.collectDir) self.failUnlessEqual("backup", remotePeer.remoteUser) self.failUnlessEqual("scp -1 -B", remotePeer.rcpCommand) self.failUnlessEqual("ssh", remotePeer.rshCommand) self.failUnlessEqual("cback", remotePeer.cbackCommand) self.failUnlessEqual(True, remotePeer.managed) self.failUnlessEqual(["collect", ], remotePeer.managedActions) self.failUnlessEqual("all", remotePeer.ignoreFailureMode) def testConstructor_003(self): """ Test assignment of name attribute, None value. """ remotePeer = RemotePeer(name="myname") self.failUnlessEqual("myname", remotePeer.name) remotePeer.name = None self.failUnlessEqual(None, remotePeer.name) def testConstructor_004(self): """ Test assignment of name attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.name) remotePeer.name = "namename" self.failUnlessEqual("namename", remotePeer.name) def testConstructor_005(self): """ Test assignment of name attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.name) self.failUnlessAssignRaises(ValueError, remotePeer, "name", "") self.failUnlessEqual(None, remotePeer.name) def testConstructor_006(self): """ Test assignment of collectDir attribute, None value. """ remotePeer = RemotePeer(collectDir="/etc/stuff") self.failUnlessEqual("/etc/stuff", remotePeer.collectDir) remotePeer.collectDir = None self.failUnlessEqual(None, remotePeer.collectDir) def testConstructor_007(self): """ Test assignment of collectDir attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.collectDir) remotePeer.collectDir = "/tmp" self.failUnlessEqual("/tmp", remotePeer.collectDir) def testConstructor_008(self): """ Test assignment of collectDir attribute, invalid value (empty). 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.collectDir) self.failUnlessAssignRaises(ValueError, remotePeer, "collectDir", "") self.failUnlessEqual(None, remotePeer.collectDir) def testConstructor_009(self): """ Test assignment of collectDir attribute, invalid value (non-absolute). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.collectDir) self.failUnlessAssignRaises(ValueError, remotePeer, "collectDir", "bogus/stuff/there") self.failUnlessEqual(None, remotePeer.collectDir) def testConstructor_010(self): """ Test assignment of remoteUser attribute, None value. """ remotePeer = RemotePeer(remoteUser="spot") self.failUnlessEqual("spot", remotePeer.remoteUser) remotePeer.remoteUser = None self.failUnlessEqual(None, remotePeer.remoteUser) def testConstructor_011(self): """ Test assignment of remoteUser attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.remoteUser) remotePeer.remoteUser = "spot" self.failUnlessEqual("spot", remotePeer.remoteUser) def testConstructor_012(self): """ Test assignment of remoteUser attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.remoteUser) self.failUnlessAssignRaises(ValueError, remotePeer, "remoteUser", "") self.failUnlessEqual(None, remotePeer.remoteUser) def testConstructor_013(self): """ Test assignment of rcpCommand attribute, None value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rcpCommand) remotePeer.rcpCommand = "scp" self.failUnlessEqual("scp", remotePeer.rcpCommand) def testConstructor_014(self): """ Test assignment of rcpCommand attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rcpCommand) remotePeer.rcpCommand = "scp" self.failUnlessEqual("scp", remotePeer.rcpCommand) def testConstructor_015(self): """ Test assignment of rcpCommand attribute, invalid value (empty). 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rcpCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "rcpCommand", "") self.failUnlessEqual(None, remotePeer.rcpCommand) def testConstructor_016(self): """ Test assignment of rshCommand attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rshCommand) remotePeer.rshCommand = "scp" self.failUnlessEqual("scp", remotePeer.rshCommand) def testConstructor_017(self): """ Test assignment of rshCommand attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rshCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "rshCommand", "") self.failUnlessEqual(None, remotePeer.rshCommand) def testConstructor_018(self): """ Test assignment of cbackCommand attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.cbackCommand) remotePeer.cbackCommand = "scp" self.failUnlessEqual("scp", remotePeer.cbackCommand) def testConstructor_019(self): """ Test assignment of cbackCommand attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.cbackCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "cbackCommand", "") self.failUnlessEqual(None, remotePeer.cbackCommand) def testConstructor_021(self): """ Test assignment of managed attribute, None value. """ remotePeer = RemotePeer(managed=True) self.failUnlessEqual(True, remotePeer.managed) remotePeer.managed = None self.failUnlessEqual(False, remotePeer.managed) def testConstructor_022(self): """ Test assignment of managed attribute, valid value (real boolean). """ remotePeer = RemotePeer() self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = True self.failUnlessEqual(True, remotePeer.managed) remotePeer.managed = False self.failUnlessEqual(False, remotePeer.managed) def testConstructor_023(self): """ Test assignment of managed attribute, valid value (expression). 
""" remotePeer = RemotePeer() self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = 0 self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = [] self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = None self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = ['a'] self.failUnlessEqual(True, remotePeer.managed) remotePeer.managed = 3 self.failUnlessEqual(True, remotePeer.managed) def testConstructor_024(self): """ Test assignment of managedActions attribute, None value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) remotePeer.managedActions = None self.failUnlessEqual(None, remotePeer.managedActions) def testConstructor_025(self): """ Test assignment of managedActions attribute, empty list. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) remotePeer.managedActions = [] self.failUnlessEqual([], remotePeer.managedActions) def testConstructor_026(self): """ Test assignment of managedActions attribute, non-empty list, valid values. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) remotePeer.managedActions = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], remotePeer.managedActions) def testConstructor_027(self): """ Test assignment of managedActions attribute, non-empty list, invalid value. 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["KEN", ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["hello, world" ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["dash-word", ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["", ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", [None, ]) self.failUnlessEqual(None, remotePeer.managedActions) def testConstructor_028(self): """ Test assignment of managedActions attribute, non-empty list, mixed values. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["ken", "dash-word", ]) def testConstructor_029(self): """ Test assignment of ignoreFailureMode attribute, valid values. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "none" self.failUnlessEqual("none", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "all" self.failUnlessEqual("all", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "daily" self.failUnlessEqual("daily", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "weekly" self.failUnlessEqual("weekly", remotePeer.ignoreFailureMode) def testConstructor_030(self): """ Test assignment of ignoreFailureMode attribute, invalid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, remotePeer, "ignoreFailureMode", "bogus") def testConstructor_031(self): """ Test assignment of ignoreFailureMode attribute, None value. 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = None self.failUnlessEqual(None, remotePeer.ignoreFailureMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer() self.failUnlessEqual(remotePeer1, remotePeer2) self.failUnless(remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(not remotePeer1 != remotePeer2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failUnless(remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(not remotePeer1 != remotePeer2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(name="name") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. 
""" remotePeer1 = RemotePeer("name1", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name2", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_005(self): """ Test comparison of two differing objects, collectDir differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(collectDir="/tmp") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_006(self): """ Test comparison of two differing objects, collectDir differs. """ remotePeer1 = RemotePeer("name", "/etc", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_007(self): """ Test comparison of two differing objects, remoteUser differs (one None). 
""" remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(remoteUser="spot") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_008(self): """ Test comparison of two differing objects, remoteUser differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "spot", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_009(self): """ Test comparison of two differing objects, rcpCommand differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(rcpCommand="scp") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_010(self): """ Test comparison of two differing objects, rcpCommand differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -2 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_011(self): """ Test comparison of two differing objects, rshCommand differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(rshCommand="ssh") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_012(self): """ Test comparison of two differing objects, rshCommand differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh2", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh1", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_013(self): """ Test comparison of two differing objects, cbackCommand differs (one None). 
""" remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(cbackCommand="cback") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_014(self): """ Test comparison of two differing objects, cbackCommand differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback2", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback1", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_015(self): """ Test comparison of two differing objects, managed differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(managed=True) self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_016(self): """ Test comparison of two differing objects, managed differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", False, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_017(self): """ Test comparison of two differing objects, managedActions differs (one None, one empty). """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, None, "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_018(self): """ Test comparison of two differing objects, managedActions differs (one None, one not empty). 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, None, "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_019(self): """ Test comparison of two differing objects, managedActions differs (one empty, one not empty). """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [], "all" ) remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_020(self): """ Test comparison of two differing objects, managedActions differs (both not empty). 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "purge", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_021(self): """ Test comparison of two differing objects, ignoreFailureMode differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(ignoreFailureMode="all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_022(self): """ Test comparison of two differing objects, ignoreFailureMode differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "none") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) ############################ # TestReferenceConfig class ############################ class TestReferenceConfig(unittest.TestCase): """Tests for the ReferenceConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ReferenceConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.author) self.failUnlessEqual(None, reference.revision) self.failUnlessEqual(None, reference.description) self.failUnlessEqual(None, reference.generator) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" reference = ReferenceConfig("one", "two", "three", "four") self.failUnlessEqual("one", reference.author) self.failUnlessEqual("two", reference.revision) self.failUnlessEqual("three", reference.description) self.failUnlessEqual("four", reference.generator) def testConstructor_003(self): """ Test assignment of author attribute, None value. """ reference = ReferenceConfig(author="one") self.failUnlessEqual("one", reference.author) reference.author = None self.failUnlessEqual(None, reference.author) def testConstructor_004(self): """ Test assignment of author attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.author) reference.author = "one" self.failUnlessEqual("one", reference.author) def testConstructor_005(self): """ Test assignment of author attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.author) reference.author = "" self.failUnlessEqual("", reference.author) def testConstructor_006(self): """ Test assignment of revision attribute, None value. """ reference = ReferenceConfig(revision="one") self.failUnlessEqual("one", reference.revision) reference.revision = None self.failUnlessEqual(None, reference.revision) def testConstructor_007(self): """ Test assignment of revision attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.revision) reference.revision = "one" self.failUnlessEqual("one", reference.revision) def testConstructor_008(self): """ Test assignment of revision attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.revision) reference.revision = "" self.failUnlessEqual("", reference.revision) def testConstructor_009(self): """ Test assignment of description attribute, None value. 
""" reference = ReferenceConfig(description="one") self.failUnlessEqual("one", reference.description) reference.description = None self.failUnlessEqual(None, reference.description) def testConstructor_010(self): """ Test assignment of description attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.description) reference.description = "one" self.failUnlessEqual("one", reference.description) def testConstructor_011(self): """ Test assignment of description attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.description) reference.description = "" self.failUnlessEqual("", reference.description) def testConstructor_012(self): """ Test assignment of generator attribute, None value. """ reference = ReferenceConfig(generator="one") self.failUnlessEqual("one", reference.generator) reference.generator = None self.failUnlessEqual(None, reference.generator) def testConstructor_013(self): """ Test assignment of generator attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.generator) reference.generator = "one" self.failUnlessEqual("one", reference.generator) def testConstructor_014(self): """ Test assignment of generator attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.generator) reference.generator = "" self.failUnlessEqual("", reference.generator) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" reference1 = ReferenceConfig() reference2 = ReferenceConfig() self.failUnlessEqual(reference1, reference2) self.failUnless(reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(not reference1 != reference2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.failUnless(reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(not reference1 != reference2) def testComparison_003(self): """ Test comparison of two differing objects, author differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(author="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_004(self): """ Test comparison of two differing objects, author differs (one empty). """ reference1 = ReferenceConfig("", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_005(self): """ Test comparison of two differing objects, author differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("author", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_006(self): """ Test comparison of two differing objects, revision differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(revision="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_007(self): """ Test comparison of two differing objects, revision differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_008(self): """ Test comparison of two differing objects, revision differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "revision", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_009(self): """ Test comparison of two differing objects, description differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(description="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_010(self): """ Test comparison of two differing objects, description differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_011(self): """ Test comparison of two differing objects, description differs. 
""" reference1 = ReferenceConfig("one", "two", "description", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_012(self): """ Test comparison of two differing objects, generator differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(generator="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_013(self): """ Test comparison of two differing objects, generator differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "") reference2 = ReferenceConfig("one", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_014(self): """ Test comparison of two differing objects, generator differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "generator") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) ############################# # TestExtensionsConfig class ############################# class TestExtensionsConfig(unittest.TestCase): """Tests for the ExtensionsConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ExtensionsConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty list), positional arguments. 
""" extensions = ExtensionsConfig([], None) self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual([], extensions.actions) extensions = ExtensionsConfig([], "index") self.failUnlessEqual("index", extensions.orderMode) self.failUnlessEqual([], extensions.actions) extensions = ExtensionsConfig([], "dependency") self.failUnlessEqual("dependency", extensions.orderMode) self.failUnlessEqual([], extensions.actions) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty list), named arguments. """ extensions = ExtensionsConfig(orderMode=None, actions=[ExtendedAction(), ]) self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual([ExtendedAction(), ], extensions.actions) extensions = ExtensionsConfig(orderMode="index", actions=[ExtendedAction(), ]) self.failUnlessEqual("index", extensions.orderMode) self.failUnlessEqual([ExtendedAction(), ], extensions.actions) extensions = ExtensionsConfig(orderMode="dependency", actions=[ExtendedAction(), ]) self.failUnlessEqual("dependency", extensions.orderMode) self.failUnlessEqual([ExtendedAction(), ], extensions.actions) def testConstructor_004(self): """ Test assignment of actions attribute, None value. """ extensions = ExtensionsConfig([]) self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual([], extensions.actions) extensions.actions = None self.failUnlessEqual(None, extensions.actions) def testConstructor_005(self): """ Test assignment of actions attribute, [] value. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.actions = [] self.failUnlessEqual([], extensions.actions) def testConstructor_006(self): """ Test assignment of actions attribute, single valid entry. 
""" extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.actions = [ExtendedAction(), ] self.failUnlessEqual([ExtendedAction(), ], extensions.actions) def testConstructor_007(self): """ Test assignment of actions attribute, multiple valid entries. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.actions = [ExtendedAction("a", "b", "c", 1), ExtendedAction("d", "e", "f", 2), ] self.failUnlessEqual([ExtendedAction("a", "b", "c", 1), ExtendedAction("d", "e", "f", 2), ], extensions.actions) def testConstructor_009(self): """ Test assignment of actions attribute, single invalid entry (not an ExtendedAction). """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "actions", [ RemotePeer(), ]) self.failUnlessEqual(None, extensions.actions) def testConstructor_010(self): """ Test assignment of actions attribute, mixed valid and invalid entries. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "actions", [ ExtendedAction(), RemotePeer(), ]) self.failUnlessEqual(None, extensions.actions) def testConstructor_011(self): """ Test assignment of orderMode attribute, None value. """ extensions = ExtensionsConfig(orderMode="index") self.failUnlessEqual("index", extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.orderMode = None self.failUnlessEqual(None, extensions.orderMode) def testConstructor_012(self): """ Test assignment of orderMode attribute, valid values. 
""" extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.orderMode = "index" self.failUnlessEqual("index", extensions.orderMode) extensions.orderMode = "dependency" self.failUnlessEqual("dependency", extensions.orderMode) def testConstructor_013(self): """ Test assignment of orderMode attribute, invalid values. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "bogus") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "indexes") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "indices") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "dependencies") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ extensions1 = ExtensionsConfig() extensions2 = ExtensionsConfig() self.failUnlessEqual(extensions1, extensions2) self.failUnless(extensions1 == extensions2) self.failUnless(not extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(extensions1 >= extensions2) self.failUnless(not extensions1 != extensions2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). 
""" extensions1 = ExtensionsConfig([], "index") extensions2 = ExtensionsConfig([], "index") self.failUnlessEqual(extensions1, extensions2) self.failUnless(extensions1 == extensions2) self.failUnless(not extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(extensions1 >= extensions2) self.failUnless(not extensions1 != extensions2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). """ extensions1 = ExtensionsConfig([ExtendedAction(), ], "index") extensions2 = ExtensionsConfig([ExtendedAction(), ], "index") self.failUnlessEqual(extensions1, extensions2) self.failUnless(extensions1 == extensions2) self.failUnless(not extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(extensions1 >= extensions2) self.failUnless(not extensions1 != extensions2) def testComparison_004(self): """ Test comparison of two differing objects, actions differs (one None, one empty). """ extensions1 = ExtensionsConfig(None) extensions2 = ExtensionsConfig([]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_005(self): """ Test comparison of two differing objects, actions differs (one None, one not empty). 
""" extensions1 = ExtensionsConfig(None) extensions2 = ExtensionsConfig([ExtendedAction(), ]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_006(self): """ Test comparison of two differing objects, actions differs (one empty, one not empty). """ extensions1 = ExtensionsConfig([]) extensions2 = ExtensionsConfig([ExtendedAction(), ]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_007(self): """ Test comparison of two differing objects, actions differs (both not empty). """ extensions1 = ExtensionsConfig([ExtendedAction(name="one"), ]) extensions2 = ExtensionsConfig([ExtendedAction(name="two"), ]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_008(self): """ Test comparison of differing objects, orderMode differs (one None). 
""" extensions1 = ExtensionsConfig([], None) extensions2 = ExtensionsConfig([], "index") self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_009(self): """ Test comparison of differing objects, orderMode differs. """ extensions1 = ExtensionsConfig([], "dependency") extensions2 = ExtensionsConfig([], "index") self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) ########################## # TestOptionsConfig class ########################## class TestOptionsConfig(unittest.TestCase): """Tests for the OptionsConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = OptionsConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) self.failUnlessEqual(None, options.workingDir) self.failUnlessEqual(None, options.backupUser) self.failUnlessEqual(None, options.backupGroup) self.failUnlessEqual(None, options.rcpCommand) self.failUnlessEqual(None, options.rshCommand) self.failUnlessEqual(None, options.cbackCommand) self.failUnlessEqual(None, options.overrides) self.failUnlessEqual(None, options.hooks) self.failUnlessEqual(None, options.managedActions) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (lists empty). """ options = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", [], [], "ssh", "cback", []) self.failUnlessEqual("monday", options.startingDay) self.failUnlessEqual("/tmp", options.workingDir) self.failUnlessEqual("user", options.backupUser) self.failUnlessEqual("group", options.backupGroup) self.failUnlessEqual("scp -1 -B", options.rcpCommand) self.failUnlessEqual("ssh", options.rshCommand) self.failUnlessEqual("cback", options.cbackCommand) self.failUnlessEqual([], options.overrides) self.failUnlessEqual([], options.hooks) self.failUnlessEqual([], options.managedActions) def testConstructor_003(self): """ Test assignment of startingDay attribute, None value. """ options = OptionsConfig(startingDay="monday") self.failUnlessEqual("monday", options.startingDay) options.startingDay = None self.failUnlessEqual(None, options.startingDay) def testConstructor_004(self): """ Test assignment of startingDay attribute, valid value. 
""" options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) options.startingDay = "monday" self.failUnlessEqual("monday", options.startingDay) options.startingDay = "tuesday" self.failUnlessEqual("tuesday", options.startingDay) options.startingDay = "wednesday" self.failUnlessEqual("wednesday", options.startingDay) options.startingDay = "thursday" self.failUnlessEqual("thursday", options.startingDay) options.startingDay = "friday" self.failUnlessEqual("friday", options.startingDay) options.startingDay = "saturday" self.failUnlessEqual("saturday", options.startingDay) options.startingDay = "sunday" self.failUnlessEqual("sunday", options.startingDay) def testConstructor_005(self): """ Test assignment of startingDay attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) self.failUnlessAssignRaises(ValueError, options, "startingDay", "") self.failUnlessEqual(None, options.startingDay) def testConstructor_006(self): """ Test assignment of startingDay attribute, invalid value (not in list). """ options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) self.failUnlessAssignRaises(ValueError, options, "startingDay", "dienstag") # ha, ha, pretend I'm German self.failUnlessEqual(None, options.startingDay) def testConstructor_007(self): """ Test assignment of workingDir attribute, None value. """ options = OptionsConfig(workingDir="/tmp") self.failUnlessEqual("/tmp", options.workingDir) options.workingDir = None self.failUnlessEqual(None, options.workingDir) def testConstructor_008(self): """ Test assignment of workingDir attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.workingDir) options.workingDir = "/tmp" self.failUnlessEqual("/tmp", options.workingDir) def testConstructor_009(self): """ Test assignment of workingDir attribute, invalid value (empty). 
""" options = OptionsConfig() self.failUnlessEqual(None, options.workingDir) self.failUnlessAssignRaises(ValueError, options, "workingDir", "") self.failUnlessEqual(None, options.workingDir) def testConstructor_010(self): """ Test assignment of workingDir attribute, invalid value (non-absolute). """ options = OptionsConfig() self.failUnlessEqual(None, options.workingDir) self.failUnlessAssignRaises(ValueError, options, "workingDir", "stuff") self.failUnlessEqual(None, options.workingDir) def testConstructor_011(self): """ Test assignment of backupUser attribute, None value. """ options = OptionsConfig(backupUser="user") self.failUnlessEqual("user", options.backupUser) options.backupUser = None self.failUnlessEqual(None, options.backupUser) def testConstructor_012(self): """ Test assignment of backupUser attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.backupUser) options.backupUser = "user" self.failUnlessEqual("user", options.backupUser) def testConstructor_013(self): """ Test assignment of backupUser attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.backupUser) self.failUnlessAssignRaises(ValueError, options, "backupUser", "") self.failUnlessEqual(None, options.backupUser) def testConstructor_014(self): """ Test assignment of backupGroup attribute, None value. """ options = OptionsConfig(backupGroup="group") self.failUnlessEqual("group", options.backupGroup) options.backupGroup = None self.failUnlessEqual(None, options.backupGroup) def testConstructor_015(self): """ Test assignment of backupGroup attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.backupGroup) options.backupGroup = "group" self.failUnlessEqual("group", options.backupGroup) def testConstructor_016(self): """ Test assignment of backupGroup attribute, invalid value (empty). 
""" options = OptionsConfig() self.failUnlessEqual(None, options.backupGroup) self.failUnlessAssignRaises(ValueError, options, "backupGroup", "") self.failUnlessEqual(None, options.backupGroup) def testConstructor_017(self): """ Test assignment of rcpCommand attribute, None value. """ options = OptionsConfig(rcpCommand="command") self.failUnlessEqual("command", options.rcpCommand) options.rcpCommand = None self.failUnlessEqual(None, options.rcpCommand) def testConstructor_018(self): """ Test assignment of rcpCommand attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.rcpCommand) options.rcpCommand = "command" self.failUnlessEqual("command", options.rcpCommand) def testConstructor_019(self): """ Test assignment of rcpCommand attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.rcpCommand) self.failUnlessAssignRaises(ValueError, options, "rcpCommand", "") self.failUnlessEqual(None, options.rcpCommand) def testConstructor_020(self): """ Test constructor with all values filled in, with valid values (lists not empty). """ overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), ] hooks = [ PreActionHook("collect", "ls -l"), ] managedActions = [ "collect", "purge", ] options = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failUnlessEqual("monday", options.startingDay) self.failUnlessEqual("/tmp", options.workingDir) self.failUnlessEqual("user", options.backupUser) self.failUnlessEqual("group", options.backupGroup) self.failUnlessEqual("scp -1 -B", options.rcpCommand) self.failUnlessEqual("ssh", options.rshCommand) self.failUnlessEqual("cback", options.cbackCommand) self.failUnlessEqual(overrides, options.overrides) self.failUnlessEqual(hooks, options.hooks) self.failUnlessEqual(managedActions, options.managedActions) def testConstructor_021(self): """ Test assignment of overrides attribute, None value. 
""" collect = OptionsConfig(overrides=[]) self.failUnlessEqual([], collect.overrides) collect.overrides = None self.failUnlessEqual(None, collect.overrides) def testConstructor_022(self): """ Test assignment of overrides attribute, [] value. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) collect.overrides = [] self.failUnlessEqual([], collect.overrides) def testConstructor_023(self): """ Test assignment of overrides attribute, single valid entry. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) collect.overrides = [CommandOverride("one", "/one"), ] self.failUnlessEqual([CommandOverride("one", "/one"), ], collect.overrides) def testConstructor_024(self): """ Test assignment of overrides attribute, multiple valid entries. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) collect.overrides = [CommandOverride("one", "/one"), CommandOverride("two", "/two"), ] self.failUnlessEqual([CommandOverride("one", "/one"), CommandOverride("two", "/two"), ], collect.overrides) def testConstructor_025(self): """ Test assignment of overrides attribute, single invalid entry (None). """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ None, ]) self.failUnlessEqual(None, collect.overrides) def testConstructor_026(self): """ Test assignment of overrides attribute, single invalid entry (not a CommandOverride). """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ "hello", ]) self.failUnlessEqual(None, collect.overrides) def testConstructor_027(self): """ Test assignment of overrides attribute, mixed valid and invalid entries. 
""" collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ "hello", CommandOverride("one", "/one"), ]) self.failUnlessEqual(None, collect.overrides) def testConstructor_028(self): """ Test assignment of hooks attribute, None value. """ collect = OptionsConfig(hooks=[]) self.failUnlessEqual([], collect.hooks) collect.hooks = None self.failUnlessEqual(None, collect.hooks) def testConstructor_029(self): """ Test assignment of hooks attribute, [] value. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) collect.hooks = [] self.failUnlessEqual([], collect.hooks) def testConstructor_030(self): """ Test assignment of hooks attribute, single valid entry. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) collect.hooks = [PreActionHook("stage", "df -k"), ] self.failUnlessEqual([PreActionHook("stage", "df -k"), ], collect.hooks) def testConstructor_031(self): """ Test assignment of hooks attribute, multiple valid entries. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) collect.hooks = [ PreActionHook("stage", "df -k"), PostActionHook("collect", "ls -l"), ] self.failUnlessEqual([PreActionHook("stage", "df -k"), PostActionHook("collect", "ls -l"), ], collect.hooks) def testConstructor_032(self): """ Test assignment of hooks attribute, single invalid entry (None). """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ None, ]) self.failUnlessEqual(None, collect.hooks) def testConstructor_033(self): """ Test assignment of hooks attribute, single invalid entry (not a ActionHook). 
""" collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ "hello", ]) self.failUnlessEqual(None, collect.hooks) def testConstructor_034(self): """ Test assignment of hooks attribute, mixed valid and invalid entries. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ "hello", PreActionHook("stage", "df -k"), ]) self.failUnlessEqual(None, collect.hooks) def testConstructor_035(self): """ Test assignment of rshCommand attribute, None value. """ options = OptionsConfig(rshCommand="command") self.failUnlessEqual("command", options.rshCommand) options.rshCommand = None self.failUnlessEqual(None, options.rshCommand) def testConstructor_036(self): """ Test assignment of rshCommand attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.rshCommand) options.rshCommand = "command" self.failUnlessEqual("command", options.rshCommand) def testConstructor_037(self): """ Test assignment of rshCommand attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.rshCommand) self.failUnlessAssignRaises(ValueError, options, "rshCommand", "") self.failUnlessEqual(None, options.rshCommand) def testConstructor_038(self): """ Test assignment of cbackCommand attribute, None value. """ options = OptionsConfig(cbackCommand="command") self.failUnlessEqual("command", options.cbackCommand) options.cbackCommand = None self.failUnlessEqual(None, options.cbackCommand) def testConstructor_039(self): """ Test assignment of cbackCommand attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.cbackCommand) options.cbackCommand = "command" self.failUnlessEqual("command", options.cbackCommand) def testConstructor_040(self): """ Test assignment of cbackCommand attribute, invalid value (empty). 
""" options = OptionsConfig() self.failUnlessEqual(None, options.cbackCommand) self.failUnlessAssignRaises(ValueError, options, "cbackCommand", "") self.failUnlessEqual(None, options.cbackCommand) def testConstructor_041(self): """ Test assignment of managedActions attribute, None value. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) options.managedActions = None self.failUnlessEqual(None, options.managedActions) def testConstructor_042(self): """ Test assignment of managedActions attribute, empty list. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) options.managedActions = [] self.failUnlessEqual([], options.managedActions) def testConstructor_043(self): """ Test assignment of managedActions attribute, non-empty list, valid values. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) options.managedActions = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], options.managedActions) def testConstructor_044(self): """ Test assignment of managedActions attribute, non-empty list, invalid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["KEN", ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["hello, world" ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["dash-word", ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["", ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", [None, ]) self.failUnlessEqual(None, options.managedActions) def testConstructor_045(self): """ Test assignment of managedActions attribute, non-empty list, mixed values. 
""" options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["ken", "dash-word", ]) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ options1 = OptionsConfig() options2 = OptionsConfig() self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_003(self): """ Test comparison of two differing objects, startingDay differs (one None). 
""" options1 = OptionsConfig() options2 = OptionsConfig(startingDay="monday") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_004(self): """ Test comparison of two differing objects, startingDay differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("tuesday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_005(self): """ Test comparison of two differing objects, workingDir differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(workingDir="/tmp") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_006(self): """ Test comparison of two differing objects, workingDir differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp/whatever", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_007(self): """ Test comparison of two differing objects, backupUser differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(backupUser="user") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_008(self): """ Test comparison of two differing objects, backupUser differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user2", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user1", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_009(self): """ Test comparison of two differing objects, backupGroup differs (one None). 
""" options1 = OptionsConfig() options2 = OptionsConfig(backupGroup="group") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_010(self): """ Test comparison of two differing objects, backupGroup differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group1", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group2", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_011(self): """ Test comparison of two differing objects, rcpCommand differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(rcpCommand="command") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_012(self): """ Test comparison of two differing objects, rcpCommand differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -2 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_013(self): """ Test comparison of two differing objects, overrides differs (one None, one empty). """ overrides1 = None overrides2 = [] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_014(self): """ Test comparison of two differing objects, overrides differs (one None, one not empty). 
""" overrides1 = None overrides2 = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2, "ssh") self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_015(self): """ Test comparison of two differing objects, overrides differs (one empty, one not empty). """ overrides1 = [ CommandOverride("one", "/one"), ] overrides2 = [] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_016(self): """ Test comparison of two differing objects, overrides differs (both not empty). 
""" overrides1 = [ CommandOverride("one", "/one"), ] overrides2 = [ CommandOverride(), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_017(self): """ Test comparison of two differing objects, hooks differs (one None, one empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks1 = None hooks2 = [] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_018(self): """ Test comparison of two differing objects, hooks differs (one None, one not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PostActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 != options2) def testComparison_019(self): """ Test comparison of two differing objects, hooks differs (one empty, one not empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PreActionHook("stage", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(options1 != options2) def testComparison_020(self): """ Test comparison of two differing objects, hooks differs (both not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PostActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_021(self): """ Test comparison of two differing objects, rshCommand differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(rshCommand="command") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_022(self): """ Test comparison of two differing objects, rshCommand differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh2", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh1", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_023(self): """ Test comparison of two differing objects, cbackCommand differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(rshCommand="command") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_024(self): """ Test comparison of two differing objects, cbackCommand differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback1", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback2", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_025(self): """ Test comparison of two differing objects, managedActions differs (one None, one empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = None managedActions2 = [] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_026(self): """ Test comparison of two differing objects, managedActions differs (one None, one not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = None managedActions2 = [ "collect", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(options1 != options2) def testComparison_027(self): """ Test comparison of two differing objects, managedActions differs (one empty, one not empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = [] managedActions2 = [ "collect", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(options1 != options2) def testComparison_028(self): """ Test comparison of two differing objects, managedActions differs (both not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = [ "collect", ] managedActions2 = [ "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) #################################### # Test add and replace of overrides #################################### def testOverrides_001(self): """ Test addOverride() with no existing overrides. """ options = OptionsConfig() options.addOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_002(self): """ Test addOverride() with no existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("one", "/one"), ] options.addOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("one", "/one"), CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_003(self): """ Test addOverride(), with existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/one"), ] options.addOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/one"), ], options.overrides) def testOverrides_004(self): """ Test replaceOverride() with no existing overrides. 
""" options = OptionsConfig() options.replaceOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_005(self): """ Test replaceOverride() with no existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("one", "/one"), ] options.replaceOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("one", "/one"), CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_006(self): """ Test replaceOverride(), with existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/one"), ] options.replaceOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) ######################## # TestPeersConfig class ######################## class TestPeersConfig(unittest.TestCase): """Tests for the PeersConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PeersConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessEqual(None, peers.remotePeers) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty lists). 
""" peers = PeersConfig([], []) self.failUnlessEqual([], peers.localPeers) self.failUnlessEqual([], peers.remotePeers) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty lists). """ peers = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual([LocalPeer(), ], peers.localPeers) self.failUnlessEqual([RemotePeer(), ], peers.remotePeers) def testConstructor_004(self): """ Test assignment of localPeers attribute, None value. """ peers = PeersConfig(localPeers=[]) self.failUnlessEqual([], peers.localPeers) peers.localPeers = None self.failUnlessEqual(None, peers.localPeers) def testConstructor_005(self): """ Test assignment of localPeers attribute, empty list. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) peers.localPeers = [] self.failUnlessEqual([], peers.localPeers) def testConstructor_006(self): """ Test assignment of localPeers attribute, single valid entry. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) peers.localPeers = [LocalPeer(), ] self.failUnlessEqual([LocalPeer(), ], peers.localPeers) def testConstructor_007(self): """ Test assignment of localPeers attribute, multiple valid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) peers.localPeers = [LocalPeer(name="one"), LocalPeer(name="two"), ] self.failUnlessEqual([LocalPeer(name="one"), LocalPeer(name="two"), ], peers.localPeers) def testConstructor_008(self): """ Test assignment of localPeers attribute, single invalid entry (None). """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [None, ]) self.failUnlessEqual(None, peers.localPeers) def testConstructor_009(self): """ Test assignment of localPeers attribute, single invalid entry (not a LocalPeer). 
""" peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [RemotePeer(), ]) self.failUnlessEqual(None, peers.localPeers) def testConstructor_010(self): """ Test assignment of localPeers attribute, mixed valid and invalid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, peers.localPeers) def testConstructor_011(self): """ Test assignment of remotePeers attribute, None value. """ peers = PeersConfig(remotePeers=[]) self.failUnlessEqual([], peers.remotePeers) peers.remotePeers = None self.failUnlessEqual(None, peers.remotePeers) def testConstructor_012(self): """ Test assignment of remotePeers attribute, empty list. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) peers.remotePeers = [] self.failUnlessEqual([], peers.remotePeers) def testConstructor_013(self): """ Test assignment of remotePeers attribute, single valid entry. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) peers.remotePeers = [RemotePeer(name="one"), ] self.failUnlessEqual([RemotePeer(name="one"), ], peers.remotePeers) def testConstructor_014(self): """ Test assignment of remotePeers attribute, multiple valid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) peers.remotePeers = [RemotePeer(name="one"), RemotePeer(name="two"), ] self.failUnlessEqual([RemotePeer(name="one"), RemotePeer(name="two"), ], peers.remotePeers) def testConstructor_015(self): """ Test assignment of remotePeers attribute, single invalid entry (None). 
""" peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [None, ]) self.failUnlessEqual(None, peers.remotePeers) def testConstructor_016(self): """ Test assignment of remotePeers attribute, single invalid entry (not a RemotePeer). """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [LocalPeer(), ]) self.failUnlessEqual(None, peers.remotePeers) def testConstructor_017(self): """ Test assignment of remotePeers attribute, mixed valid and invalid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, peers.remotePeers) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ peers1 = PeersConfig() peers2 = PeersConfig() self.failUnlessEqual(peers1, peers2) self.failUnless(peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(not peers1 != peers2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ peers1 = PeersConfig([], []) peers2 = PeersConfig([], []) self.failUnlessEqual(peers1, peers2) self.failUnless(peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(not peers1 != peers2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" peers1 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual(peers1, peers2) self.failUnless(peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(not peers1 != peers2) def testComparison_004(self): """ Test comparison of two differing objects, localPeers differs (one None, one empty). """ peers1 = PeersConfig(None, [RemotePeer(), ]) peers2 = PeersConfig([], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_005(self): """ Test comparison of two differing objects, localPeers differs (one None, one not empty). """ peers1 = PeersConfig(None, [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_006(self): """ Test comparison of two differing objects, localPeers differs (one empty, one not empty). """ peers1 = PeersConfig([], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_007(self): """ Test comparison of two differing objects, localPeers differs (both not empty). 
""" peers1 = PeersConfig([LocalPeer(name="one"), ], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(name="two"), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_008(self): """ Test comparison of two differing objects, remotePeers differs (one None, one empty). """ peers1 = PeersConfig([LocalPeer(), ], None) peers2 = PeersConfig([LocalPeer(), ], []) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_009(self): """ Test comparison of two differing objects, remotePeers differs (one None, one not empty). """ peers1 = PeersConfig([LocalPeer(), ], None) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_010(self): """ Test comparison of two differing objects, remotePeers differs (one empty, one not empty). """ peers1 = PeersConfig([LocalPeer(), ], []) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_011(self): """ Test comparison of two differing objects, remotePeers differs (both not empty). 
""" peers1 = PeersConfig([LocalPeer(), ], [RemotePeer(name="two"), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(name="one"), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(not peers1 <= peers2) self.failUnless(peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(peers1 != peers2) ########################## # TestCollectConfig class ########################## class TestCollectConfig(unittest.TestCase): """Tests for the CollectConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) self.failUnlessEqual(None, collect.collectMode) self.failUnlessEqual(None, collect.archiveMode) self.failUnlessEqual(None, collect.ignoreFile) self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (lists empty). 
""" collect = CollectConfig("/target", "incr", "tar", "ignore", [], [], [], []) self.failUnlessEqual("/target", collect.targetDir) self.failUnlessEqual("incr", collect.collectMode) self.failUnlessEqual("tar", collect.archiveMode) self.failUnlessEqual("ignore", collect.ignoreFile) self.failUnlessEqual([], collect.absoluteExcludePaths) self.failUnlessEqual([], collect.excludePatterns) self.failUnlessEqual([], collect.collectDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (lists not empty). """ collect = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failUnlessEqual("/target", collect.targetDir) self.failUnlessEqual("incr", collect.collectMode) self.failUnlessEqual("tar", collect.archiveMode) self.failUnlessEqual("ignore", collect.ignoreFile) self.failUnlessEqual(["/path", ], collect.absoluteExcludePaths) self.failUnlessEqual(["pattern", ], collect.excludePatterns) self.failUnlessEqual([CollectFile(), ], collect.collectFiles) self.failUnlessEqual([CollectDir(), ], collect.collectDirs) def testConstructor_004(self): """ Test assignment of targetDir attribute, None value. """ collect = CollectConfig(targetDir="/whatever") self.failUnlessEqual("/whatever", collect.targetDir) collect.targetDir = None self.failUnlessEqual(None, collect.targetDir) def testConstructor_005(self): """ Test assignment of targetDir attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) collect.targetDir = "/whatever" self.failUnlessEqual("/whatever", collect.targetDir) def testConstructor_006(self): """ Test assignment of targetDir attribute, invalid value (empty). 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) self.failUnlessAssignRaises(ValueError, collect, "targetDir", "") self.failUnlessEqual(None, collect.targetDir) def testConstructor_007(self): """ Test assignment of targetDir attribute, invalid value (non-absolute). """ collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) self.failUnlessAssignRaises(ValueError, collect, "targetDir", "bogus") self.failUnlessEqual(None, collect.targetDir) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ collect = CollectConfig(collectMode="incr") self.failUnlessEqual("incr", collect.collectMode) collect.collectMode = None self.failUnlessEqual(None, collect.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectMode) collect.collectMode = "daily" self.failUnlessEqual("daily", collect.collectMode) collect.collectMode = "weekly" self.failUnlessEqual("weekly", collect.collectMode) collect.collectMode = "incr" self.failUnlessEqual("incr", collect.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectMode) self.failUnlessAssignRaises(ValueError, collect, "collectMode", "") self.failUnlessEqual(None, collect.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectMode) self.failUnlessAssignRaises(ValueError, collect, "collectMode", "periodic") self.failUnlessEqual(None, collect.collectMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, None value. 
""" collect = CollectConfig(archiveMode="tar") self.failUnlessEqual("tar", collect.archiveMode) collect.archiveMode = None self.failUnlessEqual(None, collect.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.archiveMode) collect.archiveMode = "tar" self.failUnlessEqual("tar", collect.archiveMode) collect.archiveMode = "targz" self.failUnlessEqual("targz", collect.archiveMode) collect.archiveMode = "tarbz2" self.failUnlessEqual("tarbz2", collect.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collect = CollectConfig() self.failUnlessEqual(None, collect.archiveMode) self.failUnlessAssignRaises(ValueError, collect, "archiveMode", "") self.failUnlessEqual(None, collect.archiveMode) def testConstructor_015(self): """ Test assignment of archiveMode attribute, invalid value (not in list). """ collect = CollectConfig() self.failUnlessEqual(None, collect.archiveMode) self.failUnlessAssignRaises(ValueError, collect, "archiveMode", "tarz") self.failUnlessEqual(None, collect.archiveMode) def testConstructor_016(self): """ Test assignment of ignoreFile attribute, None value. """ collect = CollectConfig(ignoreFile="ignore") self.failUnlessEqual("ignore", collect.ignoreFile) collect.ignoreFile = None self.failUnlessEqual(None, collect.ignoreFile) def testConstructor_017(self): """ Test assignment of ignoreFile attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.ignoreFile) collect.ignoreFile = "ignore" self.failUnlessEqual("ignore", collect.ignoreFile) def testConstructor_018(self): """ Test assignment of ignoreFile attribute, invalid value (empty). 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.ignoreFile) self.failUnlessAssignRaises(ValueError, collect, "ignoreFile", "") self.failUnlessEqual(None, collect.ignoreFile) def testConstructor_019(self): """ Test assignment of absoluteExcludePaths attribute, None value. """ collect = CollectConfig(absoluteExcludePaths=[]) self.failUnlessEqual([], collect.absoluteExcludePaths) collect.absoluteExcludePaths = None self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_020(self): """ Test assignment of absoluteExcludePaths attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = [] self.failUnlessEqual([], collect.absoluteExcludePaths) def testConstructor_021(self): """ Test assignment of absoluteExcludePaths attribute, single valid entry. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = ["/whatever", ] self.failUnlessEqual(["/whatever", ], collect.absoluteExcludePaths) def testConstructor_022(self): """ Test assignment of absoluteExcludePaths attribute, multiple valid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = ["/one", "/two", "/three", ] self.failUnlessEqual(["/one", "/two", "/three", ], collect.absoluteExcludePaths) def testConstructor_023(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (empty). """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "", ]) self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_024(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (not absolute). 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "one", ]) self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_025(self): """ Test assignment of absoluteExcludePaths attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "one", "/two", ]) self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_026(self): """ Test assignment of excludePatterns attribute, None value. """ collect = CollectConfig(excludePatterns=[]) self.failUnlessEqual([], collect.excludePatterns) collect.excludePatterns = None self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_027(self): """ Test assignment of excludePatterns attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) collect.excludePatterns = [] self.failUnlessEqual([], collect.excludePatterns) def testConstructor_028(self): """ Test assignment of excludePatterns attribute, single valid entry. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) collect.excludePatterns = ["pattern", ] self.failUnlessEqual(["pattern", ], collect.excludePatterns) def testConstructor_029(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) collect.excludePatterns = ["pattern1", "pattern2", ] self.failUnlessEqual(["pattern1", "pattern2", ], collect.excludePatterns) def testConstructor_029a(self): """ Test assignment of excludePatterns attribute, single invalid entry. 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_029b(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", "*", ]) self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_029c(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", "valid", ]) self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_030(self): """ Test assignment of collectDirs attribute, None value. """ collect = CollectConfig(collectDirs=[]) self.failUnlessEqual([], collect.collectDirs) collect.collectDirs = None self.failUnlessEqual(None, collect.collectDirs) def testConstructor_031(self): """ Test assignment of collectDirs attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) collect.collectDirs = [] self.failUnlessEqual([], collect.collectDirs) def testConstructor_032(self): """ Test assignment of collectDirs attribute, single valid entry. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) collect.collectDirs = [CollectDir(absolutePath="/one"), ] self.failUnlessEqual([CollectDir(absolutePath="/one"), ], collect.collectDirs) def testConstructor_033(self): """ Test assignment of collectDirs attribute, multiple valid entries. 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) collect.collectDirs = [CollectDir(absolutePath="/one"), CollectDir(absolutePath="/two"), ] self.failUnlessEqual([CollectDir(absolutePath="/one"), CollectDir(absolutePath="/two"), ], collect.collectDirs) def testConstructor_034(self): """ Test assignment of collectDirs attribute, single invalid entry (None). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ None, ]) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_035(self): """ Test assignment of collectDirs attribute, single invalid entry (not a CollectDir). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ "hello", ]) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_036(self): """ Test assignment of collectDirs attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ "hello", CollectDir(), ]) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_037(self): """ Test assignment of collectFiles attribute, None value. """ collect = CollectConfig(collectFiles=[]) self.failUnlessEqual([], collect.collectFiles) collect.collectFiles = None self.failUnlessEqual(None, collect.collectFiles) def testConstructor_038(self): """ Test assignment of collectFiles attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) collect.collectFiles = [] self.failUnlessEqual([], collect.collectFiles) def testConstructor_039(self): """ Test assignment of collectFiles attribute, single valid entry. 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) collect.collectFiles = [CollectFile(absolutePath="/one"), ] self.failUnlessEqual([CollectFile(absolutePath="/one"), ], collect.collectFiles) def testConstructor_040(self): """ Test assignment of collectFiles attribute, multiple valid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) collect.collectFiles = [CollectFile(absolutePath="/one"), CollectFile(absolutePath="/two"), ] self.failUnlessEqual([CollectFile(absolutePath="/one"), CollectFile(absolutePath="/two"), ], collect.collectFiles) def testConstructor_041(self): """ Test assignment of collectFiles attribute, single invalid entry (None). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ None, ]) self.failUnlessEqual(None, collect.collectFiles) def testConstructor_042(self): """ Test assignment of collectFiles attribute, single invalid entry (not a CollectFile). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ "hello", ]) self.failUnlessEqual(None, collect.collectFiles) def testConstructor_043(self): """ Test assignment of collectFiles attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ "hello", CollectFile(), ]) self.failUnlessEqual(None, collect.collectFiles) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" collect1 = CollectConfig() collect2 = CollectConfig() self.failUnlessEqual(collect1, collect2) self.failUnless(collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(not collect1 != collect2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failUnlessEqual(collect1, collect2) self.failUnless(collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(not collect1 != collect2) def testComparison_003(self): """ Test comparison of two differing objects, targetDir differs (one None). """ collect1 = CollectConfig(None, "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target2", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_004(self): """ Test comparison of two differing objects, targetDir differs. 
""" collect1 = CollectConfig("/target1", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target2", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", None, "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ collect1 = CollectConfig("/target", "daily", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_007(self): """ Test comparison of two differing objects, archiveMode differs (one None). 
""" collect1 = CollectConfig("/target", "incr", None, "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs. """ collect1 = CollectConfig("/target", "incr", "targz", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tarbz2", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_009(self): """ Test comparison of two differing objects, ignoreFile differs (one None). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", None, ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_010(self): """ Test comparison of two differing objects, ignoreFile differs. 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore1", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore2", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_011(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", None, ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", [], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_012(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", None, ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_013(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one empty, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", [], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_014(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", "/path2", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_015(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], None, [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], [], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_016(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], None, [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_017(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], [], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_018(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", "bogus", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_019(self): """ Test comparison of two differing objects, collectDirs differs (one None, one empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], None) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], []) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_020(self): """ Test comparison of two differing objects, collectDirs differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], None) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_021(self): """ Test comparison of two differing objects, collectDirs differs (one empty, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], []) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_022(self): """ Test comparison of two differing objects, collectDirs differs (both not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_023(self): """ Test comparison of two differing objects, collectFiles differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], None, [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_024(self): """ Test comparison of two differing objects, collectFiles differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], None, [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_025(self): """ Test comparison of two differing objects, collectFiles differs (one empty, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_026(self): """ Test comparison of two differing objects, collectFiles differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), CollectFile(), ], [CollectDir() ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) ######################## # TestStageConfig class ######################## class TestStageConfig(unittest.TestCase): """Tests for the StageConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = StageConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) self.failUnlessEqual(None, stage.localPeers) self.failUnlessEqual(None, stage.remotePeers) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty lists). """ stage = StageConfig("/whatever", [], []) self.failUnlessEqual("/whatever", stage.targetDir) self.failUnlessEqual([], stage.localPeers) self.failUnlessEqual([], stage.remotePeers) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty lists). """ stage = StageConfig("/whatever", [LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual("/whatever", stage.targetDir) self.failUnlessEqual([LocalPeer(), ], stage.localPeers) self.failUnlessEqual([RemotePeer(), ], stage.remotePeers) def testConstructor_004(self): """ Test assignment of targetDir attribute, None value. """ stage = StageConfig(targetDir="/whatever") self.failUnlessEqual("/whatever", stage.targetDir) stage.targetDir = None self.failUnlessEqual(None, stage.targetDir) def testConstructor_005(self): """ Test assignment of targetDir attribute, valid value. """ stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) stage.targetDir = "/whatever" self.failUnlessEqual("/whatever", stage.targetDir) def testConstructor_006(self): """ Test assignment of targetDir attribute, invalid value (empty). """ stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) self.failUnlessAssignRaises(ValueError, stage, "targetDir", "") self.failUnlessEqual(None, stage.targetDir) def testConstructor_007(self): """ Test assignment of targetDir attribute, invalid value (non-absolute). 
""" stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) self.failUnlessAssignRaises(ValueError, stage, "targetDir", "stuff") self.failUnlessEqual(None, stage.targetDir) def testConstructor_008(self): """ Test assignment of localPeers attribute, None value. """ stage = StageConfig(localPeers=[]) self.failUnlessEqual([], stage.localPeers) stage.localPeers = None self.failUnlessEqual(None, stage.localPeers) def testConstructor_009(self): """ Test assignment of localPeers attribute, empty list. """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) stage.localPeers = [] self.failUnlessEqual([], stage.localPeers) def testConstructor_010(self): """ Test assignment of localPeers attribute, single valid entry. """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) stage.localPeers = [LocalPeer(), ] self.failUnlessEqual([LocalPeer(), ], stage.localPeers) def testConstructor_011(self): """ Test assignment of localPeers attribute, multiple valid entries. """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) stage.localPeers = [LocalPeer(name="one"), LocalPeer(name="two"), ] self.failUnlessEqual([LocalPeer(name="one"), LocalPeer(name="two"), ], stage.localPeers) def testConstructor_012(self): """ Test assignment of localPeers attribute, single invalid entry (None). """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [None, ]) self.failUnlessEqual(None, stage.localPeers) def testConstructor_013(self): """ Test assignment of localPeers attribute, single invalid entry (not a LocalPeer). """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [RemotePeer(), ]) self.failUnlessEqual(None, stage.localPeers) def testConstructor_014(self): """ Test assignment of localPeers attribute, mixed valid and invalid entries. 
""" stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, stage.localPeers) def testConstructor_015(self): """ Test assignment of remotePeers attribute, None value. """ stage = StageConfig(remotePeers=[]) self.failUnlessEqual([], stage.remotePeers) stage.remotePeers = None self.failUnlessEqual(None, stage.remotePeers) def testConstructor_016(self): """ Test assignment of remotePeers attribute, empty list. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) stage.remotePeers = [] self.failUnlessEqual([], stage.remotePeers) def testConstructor_017(self): """ Test assignment of remotePeers attribute, single valid entry. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) stage.remotePeers = [RemotePeer(name="one"), ] self.failUnlessEqual([RemotePeer(name="one"), ], stage.remotePeers) def testConstructor_018(self): """ Test assignment of remotePeers attribute, multiple valid entries. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) stage.remotePeers = [RemotePeer(name="one"), RemotePeer(name="two"), ] self.failUnlessEqual([RemotePeer(name="one"), RemotePeer(name="two"), ], stage.remotePeers) def testConstructor_019(self): """ Test assignment of remotePeers attribute, single invalid entry (None). """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [None, ]) self.failUnlessEqual(None, stage.remotePeers) def testConstructor_020(self): """ Test assignment of remotePeers attribute, single invalid entry (not a RemotePeer). 
""" stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [LocalPeer(), ]) self.failUnlessEqual(None, stage.remotePeers) def testConstructor_021(self): """ Test assignment of remotePeers attribute, mixed valid and invalid entries. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, stage.remotePeers) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ stage1 = StageConfig() stage2 = StageConfig() self.failUnlessEqual(stage1, stage2) self.failUnless(stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(not stage1 != stage2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ stage1 = StageConfig("/target", [], []) stage2 = StageConfig("/target", [], []) self.failUnlessEqual(stage1, stage2) self.failUnless(stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(not stage1 != stage2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" stage1 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual(stage1, stage2) self.failUnless(stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(not stage1 != stage2) def testComparison_004(self): """ Test comparison of two differing objects, targetDir differs (one None). """ stage1 = StageConfig() stage2 = StageConfig(targetDir="/whatever") self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_005(self): """ Test comparison of two differing objects, targetDir differs. """ stage1 = StageConfig("/target1", [LocalPeer(), ], [RemotePeer(), ]) stage2 = StageConfig("/target2", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_006(self): """ Test comparison of two differing objects, localPeers differs (one None, one empty). """ stage1 = StageConfig("/target", None, [RemotePeer(), ]) stage2 = StageConfig("/target", [], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_007(self): """ Test comparison of two differing objects, localPeers differs (one None, one not empty). 
""" stage1 = StageConfig("/target", None, [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_008(self): """ Test comparison of two differing objects, localPeers differs (one empty, one not empty). """ stage1 = StageConfig("/target", [], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_009(self): """ Test comparison of two differing objects, localPeers differs (both not empty). """ stage1 = StageConfig("/target", [LocalPeer(name="one"), ], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(name="two"), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_010(self): """ Test comparison of two differing objects, remotePeers differs (one None, one empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], None) stage2 = StageConfig("/target", [LocalPeer(), ], []) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_011(self): """ Test comparison of two differing objects, remotePeers differs (one None, one not empty). 
""" stage1 = StageConfig("/target", [LocalPeer(), ], None) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_012(self): """ Test comparison of two differing objects, remotePeers differs (one empty, one not empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], []) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_013(self): """ Test comparison of two differing objects, remotePeers differs (both not empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(name="two"), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(name="one"), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(not stage1 <= stage2) self.failUnless(stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(stage1 != stage2) ######################## # TestStoreConfig class ######################## class TestStoreConfig(unittest.TestCase): """Tests for the StoreConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. 
bad variable names). """ obj = StoreConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ store = StoreConfig() self.failUnlessEqual(None, store.sourceDir) self.failUnlessEqual(None, store.mediaType) self.failUnlessEqual(None, store.deviceType) self.failUnlessEqual(None, store.devicePath) self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessEqual(None, store.driveSpeed) self.failUnlessEqual(False, store.checkData) self.failUnlessEqual(False, store.checkMedia) self.failUnlessEqual(False, store.warnMidnite) self.failUnlessEqual(False, store.noEject) self.failUnlessEqual(None, store.blankBehavior) self.failUnlessEqual(None, store.refreshMediaDelay) self.failUnlessEqual(None, store.ejectDelay) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ behavior = BlankBehavior("weekly", "1.3") store = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior, 12, 13) self.failUnlessEqual("/source", store.sourceDir) self.failUnlessEqual("cdr-74", store.mediaType) self.failUnlessEqual("cdwriter", store.deviceType) self.failUnlessEqual("/dev/cdrw", store.devicePath) self.failUnlessEqual("0,0,0", store.deviceScsiId) self.failUnlessEqual(4, store.driveSpeed) self.failUnlessEqual(True, store.checkData) self.failUnlessEqual(True, store.checkMedia) self.failUnlessEqual(True, store.warnMidnite) self.failUnlessEqual(True, store.noEject) self.failUnlessEqual(behavior, store.blankBehavior) self.failUnlessEqual(12, store.refreshMediaDelay) self.failUnlessEqual(13, store.ejectDelay) def testConstructor_003(self): """ Test assignment of sourceDir attribute, None value. 
""" store = StoreConfig(sourceDir="/whatever") self.failUnlessEqual("/whatever", store.sourceDir) store.sourceDir = None self.failUnlessEqual(None, store.sourceDir) def testConstructor_004(self): """ Test assignment of sourceDir attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.sourceDir) store.sourceDir = "/whatever" self.failUnlessEqual("/whatever", store.sourceDir) def testConstructor_005(self): """ Test assignment of sourceDir attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.sourceDir) self.failUnlessAssignRaises(ValueError, store, "sourceDir", "") self.failUnlessEqual(None, store.sourceDir) def testConstructor_006(self): """ Test assignment of sourceDir attribute, invalid value (non-absolute). """ store = StoreConfig() self.failUnlessEqual(None, store.sourceDir) self.failUnlessAssignRaises(ValueError, store, "sourceDir", "bogus") self.failUnlessEqual(None, store.sourceDir) def testConstructor_007(self): """ Test assignment of mediaType attribute, None value. """ store = StoreConfig(mediaType="cdr-74") self.failUnlessEqual("cdr-74", store.mediaType) store.mediaType = None self.failUnlessEqual(None, store.mediaType) def testConstructor_008(self): """ Test assignment of mediaType attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.mediaType) store.mediaType = "cdr-74" self.failUnlessEqual("cdr-74", store.mediaType) store.mediaType = "cdrw-74" self.failUnlessEqual("cdrw-74", store.mediaType) store.mediaType = "cdr-80" self.failUnlessEqual("cdr-80", store.mediaType) store.mediaType = "cdrw-80" self.failUnlessEqual("cdrw-80", store.mediaType) store.mediaType = "dvd+r" self.failUnlessEqual("dvd+r", store.mediaType) store.mediaType = "dvd+rw" self.failUnlessEqual("dvd+rw", store.mediaType) def testConstructor_009(self): """ Test assignment of mediaType attribute, invalid value (empty). 
""" store = StoreConfig() self.failUnlessEqual(None, store.mediaType) self.failUnlessAssignRaises(ValueError, store, "mediaType", "") self.failUnlessEqual(None, store.mediaType) def testConstructor_010(self): """ Test assignment of mediaType attribute, invalid value (not in list). """ store = StoreConfig() self.failUnlessEqual(None, store.mediaType) self.failUnlessAssignRaises(ValueError, store, "mediaType", "floppy") self.failUnlessEqual(None, store.mediaType) def testConstructor_011(self): """ Test assignment of deviceType attribute, None value. """ store = StoreConfig(deviceType="cdwriter") self.failUnlessEqual("cdwriter", store.deviceType) store.deviceType = None self.failUnlessEqual(None, store.deviceType) def testConstructor_012(self): """ Test assignment of deviceType attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.deviceType) store.deviceType = "cdwriter" self.failUnlessEqual("cdwriter", store.deviceType) store.deviceType = "dvdwriter" self.failUnlessEqual("dvdwriter", store.deviceType) def testConstructor_013(self): """ Test assignment of deviceType attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.deviceType) self.failUnlessAssignRaises(ValueError, store, "deviceType", "") self.failUnlessEqual(None, store.deviceType) def testConstructor_014(self): """ Test assignment of deviceType attribute, invalid value (not in list). """ store = StoreConfig() self.failUnlessEqual(None, store.deviceType) self.failUnlessAssignRaises(ValueError, store, "deviceType", "ftape") self.failUnlessEqual(None, store.deviceType) def testConstructor_015(self): """ Test assignment of devicePath attribute, None value. """ store = StoreConfig(devicePath="/dev/cdrw") self.failUnlessEqual("/dev/cdrw", store.devicePath) store.devicePath = None self.failUnlessEqual(None, store.devicePath) def testConstructor_016(self): """ Test assignment of devicePath attribute, valid value. 
""" store = StoreConfig() self.failUnlessEqual(None, store.devicePath) store.devicePath = "/dev/cdrw" self.failUnlessEqual("/dev/cdrw", store.devicePath) def testConstructor_017(self): """ Test assignment of devicePath attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.devicePath) self.failUnlessAssignRaises(ValueError, store, "devicePath", "") self.failUnlessEqual(None, store.devicePath) def testConstructor_018(self): """ Test assignment of devicePath attribute, invalid value (non-absolute). """ store = StoreConfig() self.failUnlessEqual(None, store.devicePath) self.failUnlessAssignRaises(ValueError, store, "devicePath", "dev/cdrw") self.failUnlessEqual(None, store.devicePath) def testConstructor_019(self): """ Test assignment of deviceScsiId attribute, None value. """ store = StoreConfig(deviceScsiId="0,0,0") self.failUnlessEqual("0,0,0", store.deviceScsiId) store.deviceScsiId = None self.failUnlessEqual(None, store.deviceScsiId) def testConstructor_020(self): """ Test assignment of deviceScsiId attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.deviceScsiId) store.deviceScsiId = "0,0,0" self.failUnlessEqual("0,0,0", store.deviceScsiId) store.deviceScsiId = "ATA:0,0,0" self.failUnlessEqual("ATA:0,0,0", store.deviceScsiId) def testConstructor_021(self): """ Test assignment of deviceScsiId attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "") self.failUnlessEqual(None, store.deviceScsiId) def testConstructor_022(self): """ Test assignment of deviceScsiId attribute, invalid value (invalid id). 
""" store = StoreConfig() self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "ATA;0,0,0") self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "ATAPI-0,0,0") self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "1:2:3") self.failUnlessEqual(None, store.deviceScsiId) def testConstructor_023(self): """ Test assignment of driveSpeed attribute, None value. """ store = StoreConfig(driveSpeed=4) self.failUnlessEqual(4, store.driveSpeed) store.driveSpeed = None self.failUnlessEqual(None, store.driveSpeed) def testConstructor_024(self): """ Test assignment of driveSpeed attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.driveSpeed) store.driveSpeed = 4 self.failUnlessEqual(4, store.driveSpeed) store.driveSpeed = "12" self.failUnlessEqual(12, store.driveSpeed) def testConstructor_025(self): """ Test assignment of driveSpeed attribute, invalid value (not an integer). """ store = StoreConfig() self.failUnlessEqual(None, store.driveSpeed) self.failUnlessAssignRaises(ValueError, store, "driveSpeed", "blech") self.failUnlessEqual(None, store.driveSpeed) self.failUnlessAssignRaises(ValueError, store, "driveSpeed", CollectDir()) self.failUnlessEqual(None, store.driveSpeed) def testConstructor_026(self): """ Test assignment of checkData attribute, None value. """ store = StoreConfig(checkData=True) self.failUnlessEqual(True, store.checkData) store.checkData = None self.failUnlessEqual(False, store.checkData) def testConstructor_027(self): """ Test assignment of checkData attribute, valid value (real boolean). 
""" store = StoreConfig() self.failUnlessEqual(False, store.checkData) store.checkData = True self.failUnlessEqual(True, store.checkData) store.checkData = False self.failUnlessEqual(False, store.checkData) def testConstructor_028(self): """ Test assignment of checkData attribute, valid value (expression). """ store = StoreConfig() self.failUnlessEqual(False, store.checkData) store.checkData = 0 self.failUnlessEqual(False, store.checkData) store.checkData = [] self.failUnlessEqual(False, store.checkData) store.checkData = None self.failUnlessEqual(False, store.checkData) store.checkData = ['a'] self.failUnlessEqual(True, store.checkData) store.checkData = 3 self.failUnlessEqual(True, store.checkData) def testConstructor_029(self): """ Test assignment of warnMidnite attribute, None value. """ store = StoreConfig(warnMidnite=True) self.failUnlessEqual(True, store.warnMidnite) store.warnMidnite = None self.failUnlessEqual(False, store.warnMidnite) def testConstructor_030(self): """ Test assignment of warnMidnite attribute, valid value (real boolean). """ store = StoreConfig() self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = True self.failUnlessEqual(True, store.warnMidnite) store.warnMidnite = False self.failUnlessEqual(False, store.warnMidnite) def testConstructor_031(self): """ Test assignment of warnMidnite attribute, valid value (expression). """ store = StoreConfig() self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = 0 self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = [] self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = None self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = ['a'] self.failUnlessEqual(True, store.warnMidnite) store.warnMidnite = 3 self.failUnlessEqual(True, store.warnMidnite) def testConstructor_032(self): """ Test assignment of noEject attribute, None value. 
""" store = StoreConfig(noEject=True) self.failUnlessEqual(True, store.noEject) store.noEject = None self.failUnlessEqual(False, store.noEject) def testConstructor_033(self): """ Test assignment of noEject attribute, valid value (real boolean). """ store = StoreConfig() self.failUnlessEqual(False, store.noEject) store.noEject = True self.failUnlessEqual(True, store.noEject) store.noEject = False self.failUnlessEqual(False, store.noEject) def testConstructor_034(self): """ Test assignment of noEject attribute, valid value (expression). """ store = StoreConfig() self.failUnlessEqual(False, store.noEject) store.noEject = 0 self.failUnlessEqual(False, store.noEject) store.noEject = [] self.failUnlessEqual(False, store.noEject) store.noEject = None self.failUnlessEqual(False, store.noEject) store.noEject = ['a'] self.failUnlessEqual(True, store.noEject) store.noEject = 3 self.failUnlessEqual(True, store.noEject) def testConstructor_035(self): """ Test assignment of checkMedia attribute, None value. """ store = StoreConfig(checkMedia=True) self.failUnlessEqual(True, store.checkMedia) store.checkMedia = None self.failUnlessEqual(False, store.checkMedia) def testConstructor_036(self): """ Test assignment of checkMedia attribute, valid value (real boolean). """ store = StoreConfig() self.failUnlessEqual(False, store.checkMedia) store.checkMedia = True self.failUnlessEqual(True, store.checkMedia) store.checkMedia = False self.failUnlessEqual(False, store.checkMedia) def testConstructor_037(self): """ Test assignment of checkMedia attribute, valid value (expression). 
""" store = StoreConfig() self.failUnlessEqual(False, store.checkMedia) store.checkMedia = 0 self.failUnlessEqual(False, store.checkMedia) store.checkMedia = [] self.failUnlessEqual(False, store.checkMedia) store.checkMedia = None self.failUnlessEqual(False, store.checkMedia) store.checkMedia = ['a'] self.failUnlessEqual(True, store.checkMedia) store.checkMedia = 3 self.failUnlessEqual(True, store.checkMedia) def testConstructor_038(self): """ Test assignment of blankBehavior attribute, None value. """ store = StoreConfig() store.blankBehavior = None self.failUnlessEqual(None, store.blankBehavior) def testConstructor_039(self): """ Test assignment of blankBehavior store attribute, valid value. """ store = StoreConfig() store.blankBehavior = BlankBehavior() self.failUnlessEqual(BlankBehavior(), store.blankBehavior) def testConstructor_040(self): """ Test assignment of blankBehavior store attribute, invalid value (not BlankBehavior). """ store = StoreConfig() self.failUnlessAssignRaises(ValueError, store, "blankBehavior", CollectDir()) def testConstructor_041(self): """ Test assignment of refreshMediaDelay attribute, None value. """ store = StoreConfig(refreshMediaDelay=4) self.failUnlessEqual(4, store.refreshMediaDelay) store.refreshMediaDelay = None self.failUnlessEqual(None, store.refreshMediaDelay) def testConstructor_042(self): """ Test assignment of refreshMediaDelay attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.refreshMediaDelay) store.refreshMediaDelay = 4 self.failUnlessEqual(4, store.refreshMediaDelay) store.refreshMediaDelay = "12" self.failUnlessEqual(12, store.refreshMediaDelay) store.refreshMediaDelay = "0" self.failUnlessEqual(None, store.refreshMediaDelay) store.refreshMediaDelay = 0 self.failUnlessEqual(None, store.refreshMediaDelay) def testConstructor_043(self): """ Test assignment of refreshMediaDelay attribute, invalid value (not an integer). 
""" store = StoreConfig() self.failUnlessEqual(None, store.refreshMediaDelay) self.failUnlessAssignRaises(ValueError, store, "refreshMediaDelay", "blech") self.failUnlessEqual(None, store.refreshMediaDelay) self.failUnlessAssignRaises(ValueError, store, "refreshMediaDelay", CollectDir()) self.failUnlessEqual(None, store.refreshMediaDelay) def testConstructor_044(self): """ Test assignment of ejectDelay attribute, None value. """ store = StoreConfig(ejectDelay=4) self.failUnlessEqual(4, store.ejectDelay) store.ejectDelay = None self.failUnlessEqual(None, store.ejectDelay) def testConstructor_045(self): """ Test assignment of ejectDelay attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.ejectDelay) store.ejectDelay = 4 self.failUnlessEqual(4, store.ejectDelay) store.ejectDelay = "12" self.failUnlessEqual(12, store.ejectDelay) store.ejectDelay = "0" self.failUnlessEqual(None, store.ejectDelay) store.ejectDelay = 0 self.failUnlessEqual(None, store.ejectDelay) def testConstructor_046(self): """ Test assignment of ejectDelay attribute, invalid value (not an integer). """ store = StoreConfig() self.failUnlessEqual(None, store.ejectDelay) self.failUnlessAssignRaises(ValueError, store, "ejectDelay", "blech") self.failUnlessEqual(None, store.ejectDelay) self.failUnlessAssignRaises(ValueError, store, "ejectDelay", CollectDir()) self.failUnlessEqual(None, store.ejectDelay) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" store1 = StoreConfig() store2 = StoreConfig() self.failUnlessEqual(store1, store2) self.failUnless(store1 == store2) self.failUnless(not store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(store1 >= store2) self.failUnless(not store1 != store2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failUnlessEqual(store1, store2) self.failUnless(store1 == store2) self.failUnless(not store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(store1 >= store2) self.failUnless(not store1 != store2) def testComparison_003(self): """ Test comparison of two differing objects, sourceDir differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(sourceDir="/whatever") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_004(self): """ Test comparison of two differing objects, sourceDir differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source1", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source2", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_005(self): """ Test comparison of two differing objects, mediaType differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(mediaType="cdr-74") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_006(self): """ Test comparison of two differing objects, mediaType differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdrw-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(not store1 < store2) self.failUnless(not store1 <= store2) self.failUnless(store1 > store2) self.failUnless(store1 >= store2) self.failUnless(store1 != store2) def testComparison_007(self): """ Test comparison of two differing objects, deviceType differs (one None). 
""" store1 = StoreConfig() store2 = StoreConfig(deviceType="cdwriter") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_008(self): """ Test comparison of two differing objects, devicePath differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(devicePath="/dev/cdrw") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_009(self): """ Test comparison of two differing objects, devicePath differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/hdd", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_010(self): """ Test comparison of two differing objects, deviceScsiId differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(deviceScsiId="0,0,0") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_011(self): """ Test comparison of two differing objects, deviceScsiId differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "ATA:0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_012(self): """ Test comparison of two differing objects, driveSpeed differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(driveSpeed=3) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_013(self): """ Test comparison of two differing objects, driveSpeed differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_014(self): """ Test comparison of two differing objects, checkData differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, False, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_015(self): """ Test comparison of two differing objects, warnMidnite differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, False, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_016(self): """ Test comparison of two differing objects, noEject differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, False, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_017(self): """ Test comparison of two differing objects, checkMedia differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, False, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_018(self): """ Test comparison of two differing objects, blankBehavior differs (one None). """ behavior = BlankBehavior() store1 = StoreConfig() store2 = StoreConfig(blankBehavior=behavior) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_019(self): """ Test comparison of two differing objects, blankBehavior differs. 
""" behavior1 = BlankBehavior("daily", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_020(self): """ Test comparison of two differing objects, refreshMediaDelay differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(refreshMediaDelay=3) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_021(self): """ Test comparison of two differing objects, refreshMediaDelay differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 1, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_022(self): """ Test comparison of two differing objects, ejectDelay differs (one None). 
""" store1 = StoreConfig() store2 = StoreConfig(ejectDelay=3) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_023(self): """ Test comparison of two differing objects, ejectDelay differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4, 1) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) ######################## # TestPurgeConfig class ######################## class TestPurgeConfig(unittest.TestCase): """Tests for the PurgeConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PurgeConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty list). """ purge = PurgeConfig([]) self.failUnlessEqual([], purge.purgeDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty list). """ purge = PurgeConfig([PurgeDir(), ]) self.failUnlessEqual([PurgeDir(), ], purge.purgeDirs) def testConstructor_004(self): """ Test assignment of purgeDirs attribute, None value. """ purge = PurgeConfig([]) self.failUnlessEqual([], purge.purgeDirs) purge.purgeDirs = None self.failUnlessEqual(None, purge.purgeDirs) def testConstructor_005(self): """ Test assignment of purgeDirs attribute, [] value. """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) purge.purgeDirs = [] self.failUnlessEqual([], purge.purgeDirs) def testConstructor_006(self): """ Test assignment of purgeDirs attribute, single valid entry. """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) purge.purgeDirs = [PurgeDir(), ] self.failUnlessEqual([PurgeDir(), ], purge.purgeDirs) def testConstructor_007(self): """ Test assignment of purgeDirs attribute, multiple valid entries. """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) purge.purgeDirs = [PurgeDir("/one"), PurgeDir("/two"), ] self.failUnlessEqual([PurgeDir("/one"), PurgeDir("/two"), ], purge.purgeDirs) def testConstructor_009(self): """ Test assignment of purgeDirs attribute, single invalid entry (not a PurgeDir). """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) self.failUnlessAssignRaises(ValueError, purge, "purgeDirs", [ RemotePeer(), ]) self.failUnlessEqual(None, purge.purgeDirs) def testConstructor_010(self): """ Test assignment of purgeDirs attribute, mixed valid and invalid entries. 
""" purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) self.failUnlessAssignRaises(ValueError, purge, "purgeDirs", [ PurgeDir(), RemotePeer(), ]) self.failUnlessEqual(None, purge.purgeDirs) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ purge1 = PurgeConfig() purge2 = PurgeConfig() self.failUnlessEqual(purge1, purge2) self.failUnless(purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(not purge1 != purge2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ purge1 = PurgeConfig([]) purge2 = PurgeConfig([]) self.failUnlessEqual(purge1, purge2) self.failUnless(purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(not purge1 != purge2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). """ purge1 = PurgeConfig([PurgeDir(), ]) purge2 = PurgeConfig([PurgeDir(), ]) self.failUnlessEqual(purge1, purge2) self.failUnless(purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(not purge1 != purge2) def testComparison_004(self): """ Test comparison of two differing objects, purgeDirs differs (one None, one empty). 
""" purge1 = PurgeConfig(None) purge2 = PurgeConfig([]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(not purge1 >= purge2) self.failUnless(purge1 != purge2) def testComparison_005(self): """ Test comparison of two differing objects, purgeDirs differs (one None, one not empty). """ purge1 = PurgeConfig(None) purge2 = PurgeConfig([PurgeDir(), ]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(not purge1 >= purge2) self.failUnless(purge1 != purge2) def testComparison_006(self): """ Test comparison of two differing objects, purgeDirs differs (one empty, one not empty). """ purge1 = PurgeConfig([]) purge2 = PurgeConfig([PurgeDir(), ]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(not purge1 >= purge2) self.failUnless(purge1 != purge2) def testComparison_007(self): """ Test comparison of two differing objects, purgeDirs differs (both not empty). 
""" purge1 = PurgeConfig([PurgeDir("/two"), ]) purge2 = PurgeConfig([PurgeDir("/one"), ]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(not purge1 <= purge2) self.failUnless(purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(purge1 != purge2) ################### # TestConfig class ################### class TestConfig(unittest.TestCase): """Tests for the Config class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Config() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = Config(validate=False) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_002(self): """ Test empty constructor, validate=True. 
""" config = Config(validate=True) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["cback.conf.2"] contents = open(path).read() self.failUnlessRaises(ValueError, Config, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test with empty config document as data, validate=False. """ path = self.resources["cback.conf.2"] contents = open(path).read() config = Config(xmlData=contents, validate=False) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_005(self): """ Test with empty config document in a file, validate=False. """ path = self.resources["cback.conf.2"] config = Config(xmlPath=path, validate=False) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_006(self): """ Test assignment of reference attribute, None value. """ config = Config() config.reference = None self.failUnlessEqual(None, config.reference) def testConstructor_007(self): """ Test assignment of reference attribute, valid value. 
""" config = Config() config.reference = ReferenceConfig() self.failUnlessEqual(ReferenceConfig(), config.reference) def testConstructor_008(self): """ Test assignment of reference attribute, invalid value (not ReferenceConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "reference", CollectDir()) def testConstructor_009(self): """ Test assignment of extensions attribute, None value. """ config = Config() config.extensions = None self.failUnlessEqual(None, config.extensions) def testConstructor_010(self): """ Test assignment of extensions attribute, valid value. """ config = Config() config.extensions = ExtensionsConfig() self.failUnlessEqual(ExtensionsConfig(), config.extensions) def testConstructor_011(self): """ Test assignment of extensions attribute, invalid value (not ExtensionsConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "extensions", CollectDir()) def testConstructor_012(self): """ Test assignment of options attribute, None value. """ config = Config() config.options = None self.failUnlessEqual(None, config.options) def testConstructor_013(self): """ Test assignment of options attribute, valid value. """ config = Config() config.options = OptionsConfig() self.failUnlessEqual(OptionsConfig(), config.options) def testConstructor_014(self): """ Test assignment of options attribute, invalid value (not OptionsConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "options", CollectDir()) def testConstructor_015(self): """ Test assignment of collect attribute, None value. """ config = Config() config.collect = None self.failUnlessEqual(None, config.collect) def testConstructor_016(self): """ Test assignment of collect attribute, valid value. """ config = Config() config.collect = CollectConfig() self.failUnlessEqual(CollectConfig(), config.collect) def testConstructor_017(self): """ Test assignment of collect attribute, invalid value (not CollectConfig). 
""" config = Config() self.failUnlessAssignRaises(ValueError, config, "collect", CollectDir()) def testConstructor_018(self): """ Test assignment of stage attribute, None value. """ config = Config() config.stage = None self.failUnlessEqual(None, config.stage) def testConstructor_019(self): """ Test assignment of stage attribute, valid value. """ config = Config() config.stage = StageConfig() self.failUnlessEqual(StageConfig(), config.stage) def testConstructor_020(self): """ Test assignment of stage attribute, invalid value (not StageConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "stage", CollectDir()) def testConstructor_021(self): """ Test assignment of store attribute, None value. """ config = Config() config.store = None self.failUnlessEqual(None, config.store) def testConstructor_022(self): """ Test assignment of store attribute, valid value. """ config = Config() config.store = StoreConfig() self.failUnlessEqual(StoreConfig(), config.store) def testConstructor_023(self): """ Test assignment of store attribute, invalid value (not StoreConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "store", CollectDir()) def testConstructor_024(self): """ Test assignment of purge attribute, None value. """ config = Config() config.purge = None self.failUnlessEqual(None, config.purge) def testConstructor_025(self): """ Test assignment of purge attribute, valid value. """ config = Config() config.purge = PurgeConfig() self.failUnlessEqual(PurgeConfig(), config.purge) def testConstructor_026(self): """ Test assignment of purge attribute, invalid value (not PurgeConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "purge", CollectDir()) def testConstructor_027(self): """ Test assignment of peers attribute, None value. """ config = Config() config.peers = None self.failUnlessEqual(None, config.peers) def testConstructor_028(self): """ Test assignment of peers attribute, valid value. 
""" config = Config() config.peers = PeersConfig() self.failUnlessEqual(PeersConfig(), config.peers) def testConstructor_029(self): """ Test assignment of peers attribute, invalid value (not PeersConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "peers", CollectDir()) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = Config() config2 = Config() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, reference differs (one None). 
""" config1 = Config() config2 = Config() config2.reference = ReferenceConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, reference differs. """ config1 = Config() config1.reference = ReferenceConfig(author="one") config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig(author="two") config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_005(self): """ Test comparison of two differing objects, extensions differs (one None). """ config1 = Config() config2 = Config() config2.extensions = ExtensionsConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_006(self): """ Test comparison of two differing objects, extensions differs (one list empty, one None). 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig(None) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_007(self): """ Test comparison of two differing objects, extensions differs (one list empty, one not empty). """ config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig([]) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([ExtendedAction("one", "two", "three"), ]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_008(self): """ Test comparison of two differing objects, extensions differs (both lists not empty). 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig([ExtendedAction("one", "two", "three"), ]) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([ExtendedAction("one", "two", "four"), ]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(not config1 < config2) self.failUnless(not config1 <= config2) self.failUnless(config1 > config2) self.failUnless(config1 >= config2) self.failUnless(config1 != config2) def testComparison_009(self): """ Test comparison of two differing objects, options differs (one None). """ config1 = Config() config2 = Config() config2.options = OptionsConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_010(self): """ Test comparison of two differing objects, options differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig(startingDay="tuesday") config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig(startingDay="monday") config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(not config1 < config2) self.failUnless(not config1 <= config2) self.failUnless(config1 > config2) self.failUnless(config1 >= config2) self.failUnless(config1 != config2) def testComparison_011(self): """ Test comparison of two differing objects, collect differs (one None). """ config1 = Config() config2 = Config() config2.collect = CollectConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_012(self): """ Test comparison of two differing objects, collect differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig(collectMode="daily") config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig(collectMode="incr") config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_013(self): """ Test comparison of two differing objects, stage differs (one None). """ config1 = Config() config2 = Config() config2.stage = StageConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_014(self): """ Test comparison of two differing objects, stage differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig(targetDir="/something") config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig(targetDir="/whatever") config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_015(self): """ Test comparison of two differing objects, store differs (one None). """ config1 = Config() config2 = Config() config2.store = StoreConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_016(self): """ Test comparison of two differing objects, store differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig(deviceScsiId="ATA:0,0,0") config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig(deviceScsiId="0,0,0") config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(not config1 < config2) self.failUnless(not config1 <= config2) self.failUnless(config1 > config2) self.failUnless(config1 >= config2) self.failUnless(config1 != config2) def testComparison_017(self): """ Test comparison of two differing objects, purge differs (one None). """ config1 = Config() config2 = Config() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_018(self): """ Test comparison of two differing objects, purge differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig(purgeDirs=None) config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig(purgeDirs=[]) self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_019(self): """ Test comparison of two differing objects, peers differs (one None). """ config1 = Config() config2 = Config() config2.peers = PeersConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_020(self): """ Test comparison of two identical objects, peers differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig() config2.options = OptionsConfig() config2.peers = PeersConfig(localPeers=[LocalPeer(), ]) config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on an empty reference section. """ config = Config() config.reference = ReferenceConfig() config._validateReference() def testValidate_002(self): """ Test validate on a non-empty reference section, with everything filled in. """ config = Config() config.reference = ReferenceConfig("author", "revision", "description", "generator") config._validateReference() def testValidate_003(self): """ Test validate on an empty extensions section, with a None list. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = None config._validateExtensions() def testValidate_004(self): """ Test validate on an empty extensions section, with [] for the list. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [] config._validateExtensions() def testValidate_005(self): """ Test validate on an a extensions section, with one empty extended action. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_006(self): """ Test validate on an a extensions section, with one extended action that has only a name. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(name="name"), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_007(self): """ Test validate on an a extensions section, with one extended action that has only a module. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(module="module"), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_008(self): """ Test validate on an a extensions section, with one extended action that has only a function. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(function="function"), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_009(self): """ Test validate on an a extensions section, with one extended action that has only an index. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(index=12), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_010(self): """ Test validate on an a extensions section, with one extended action that makes sense, index order mode. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("one", "two", "three", 100) ] config._validateExtensions() def testValidate_011(self): """ Test validate on an a extensions section, with one extended action that makes sense, dependency order mode. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("one", "two", "three", dependencies=ActionDependencies()) ] config._validateExtensions() def testValidate_012(self): """ Test validate on an a extensions section, with several extended actions that make sense for various kinds of order modes. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ ExtendedAction("a", "b", "c", 1), ExtendedAction("e", "f", "g", 10), ] config._validateExtensions() config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", 1), ExtendedAction("e", "f", "g", 10), ] config._validateExtensions() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] config._validateExtensions() def testValidate_012a(self): """ Test validate on an a extensions section, with several extended actions that don't have the proper ordering modes. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", 100), ExtendedAction("e", "f", "g", 12), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", 12), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", 12), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_013(self): """ Test validate on an empty options section. """ config = Config() config.options = OptionsConfig() self.failUnlessRaises(ValueError, config._validateOptions) def testValidate_014(self): """ Test validate on a non-empty options section, with everything filled in. """ config = Config() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config._validateOptions() def testValidate_015(self): """ Test validate on a non-empty options section, with individual items missing. 
""" config = Config() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config._validateOptions() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.startingDay = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.workingDir = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.backupUser = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.backupGroup = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.rcpCommand = None self.failUnlessRaises(ValueError, config._validateOptions) def testValidate_016(self): """ Test validate on an empty collect section. """ config = Config() config.collect = CollectConfig() self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_017(self): """ Test validate on collect section containing only targetDir. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config._validateCollect() # we no longer validate that at least one file or dir is required here def testValidate_018(self): """ Test validate on collect section containing only targetDir and one collectDirs entry that is empty. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_018a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry that is empty. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_019(self): """ Test validate on collect section containing only targetDir and one collectDirs entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff"), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_019a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff"), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_020(self): """ Test validate on collect section containing only targetDir and one collectDirs entry with path, collect mode, archive mode and ignore file. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i"), ] config._validateCollect() def testValidate_020a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry with path, collect mode and archive mode. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff", collectMode="incr", archiveMode="tar"), ] config._validateCollect() def testValidate_021(self): """ Test validate on collect section containing targetDir, collect mode, archive mode and ignore file, and one collectDirs entry with only a path. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectMode = "incr" config.collect.archiveMode = "tar" config.collect.ignoreFile = "ignore" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff"), ] config._validateCollect() def testValidate_021a(self): """ Test validate on collect section containing targetDir, collect mode, archive mode and ignore file, and one collectFiles entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectMode = "incr" config.collect.archiveMode = "tar" config.collect.ignoreFile = "ignore" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff"), ] config._validateCollect() def testValidate_022(self): """ Test validate on collect section containing targetDir, but with collect mode, archive mode and ignore file mixed between main section and directories. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.archiveMode = "tar" config.collect.ignoreFile = "ignore" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", ignoreFile="i"), ] config._validateCollect() config.collect.collectDirs.append(CollectDir(absolutePath="/stuff2")) self.failUnlessRaises(ValueError, config._validateCollect) config.collect.collectDirs[-1].collectMode = "daily" config._validateCollect() def testValidate_022a(self): """ Test validate on collect section containing targetDir, but with collect mode, and archive mode mixed between main section and directories. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.archiveMode = "tar" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff", collectMode="incr", archiveMode="targz"), ] config._validateCollect() config.collect.collectFiles.append(CollectFile(absolutePath="/stuff2")) self.failUnlessRaises(ValueError, config._validateCollect) config.collect.collectFiles[-1].collectMode = "daily" config._validateCollect() def testValidate_023(self): """ Test validate on an empty stage section. """ config = Config() config.stage = StageConfig() self.failUnlessRaises(ValueError, config._validateStage) def testValidate_024(self): """ Test validate on stage section containing only targetDir and None for the lists. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = None config.stage.remotePeers = None self.failUnlessRaises(ValueError, config._validateStage) def testValidate_025(self): """ Test validate on stage section containing only targetDir and [] for the lists. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_026(self): """ Test validate on stage section containing targetDir and one local peer that is empty. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(), ] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_027(self): """ Test validate on stage section containing targetDir and one local peer with only a name. 
""" config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="name"), ] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_028(self): """ Test validate on stage section containing targetDir and one local peer with a name and path, None for remote list. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.stage.remotePeers = None config._validateStage() def testValidate_029(self): """ Test validate on stage section containing targetDir and one local peer with a name and path, [] for remote list. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.stage.remotePeers = [] config._validateStage() def testValidate_030(self): """ Test validate on stage section containing targetDir and one remote peer that is empty. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.remotePeers = [RemotePeer(), ] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_031(self): """ Test validate on stage section containing targetDir and one remote peer with only a name. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.remotePeers = [RemotePeer(name="blech"), ] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_032(self): """ Test validate on stage section containing targetDir and one remote peer with a name and path, None for local list. 
""" config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = None config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.failUnlessRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" config._validateStage() def testValidate_033(self): """ Test validate on stage section containing targetDir and one remote peer with a name and path, [] for local list. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.failUnlessRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" config._validateStage() def testValidate_034(self): """ Test validate on stage section containing targetDir and one remote and one local peer. 
""" config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), ] config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.failUnlessRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" config._validateStage() def testValidate_035(self): """ Test validate on stage section containing targetDir multiple remote and local peers. """ config = Config() config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), LocalPeer("one", "/two"), LocalPeer("a", "/b"), ] config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), RemotePeer("c", "/d"), ] self.failUnlessRaises(ValueError, config._validateStage) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validateStage() config.options = None self.failUnlessRaises(ValueError, config._validateStage) config.stage.remotePeers[-1].remoteUser = "remote" config.stage.remotePeers[-1].rcpCommand = "command" self.failUnlessRaises(ValueError, config._validateStage) config.stage.remotePeers[0].remoteUser = "remote" config.stage.remotePeers[0].rcpCommand = "command" config._validateStage() def testValidate_036(self): """ Test validate on an empty store section. """ config = Config() config.store = StoreConfig() self.failUnlessRaises(ValueError, config._validateStore) def testValidate_037(self): """ Test validate on store section with everything filled in. 
""" config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-80" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-80" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+r" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True 
config._validateStore() config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+rw" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() def testValidate_038(self): """ Test validate on store section missing one each of required fields. """ config = Config() config.store = StoreConfig() config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) config.store = StoreConfig() config.store.sourceDir = "/source" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) def testValidate_039(self): """ Test validate on store section missing one each of device type, drive speed and capacity mode and the booleans. 
""" config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.warnMidnite = True config.store.noEject = True config._validateStore() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" 
config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True config._validateStore() def testValidate_039a(self): """ Test validate on store section with everything filled in, but mismatch device/media. """ config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-74" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-74" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdr-80" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "cdrw-80" config.store.deviceType = "dvdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True 
config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+rw" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) config = Config() config.store = StoreConfig() config.store.sourceDir = "/source" config.store.mediaType = "dvd+r" config.store.deviceType = "cdwriter" config.store.devicePath = "/dev/cdrw" config.store.deviceScsiId = "0,0,0" config.store.driveSpeed = 4 config.store.checkData = True config.store.checkMedia = True config.store.warnMidnite = True config.store.noEject = True self.failUnlessRaises(ValueError, config._validateStore) def testValidate_040(self): """ Test validate on an empty purge section, with a None list. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = None config._validatePurge() def testValidate_041(self): """ Test validate on an empty purge section, with [] for the list. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [] config._validatePurge() def testValidate_042(self): """ Test validate on an a purge section, with one empty purge dir. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [PurgeDir(), ] self.failUnlessRaises(ValueError, config._validatePurge) def testValidate_043(self): """ Test validate on an a purge section, with one purge dir that has only a path. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [PurgeDir(absolutePath="/whatever"), ] self.failUnlessRaises(ValueError, config._validatePurge) def testValidate_044(self): """ Test validate on an a purge section, with one purge dir that has only retain days. 
""" config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [PurgeDir(retainDays=3), ] self.failUnlessRaises(ValueError, config._validatePurge) def testValidate_045(self): """ Test validate on an a purge section, with one purge dir that makes sense. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [ PurgeDir(absolutePath="/whatever", retainDays=4), ] config._validatePurge() def testValidate_046(self): """ Test validate on an a purge section, with several purge dirs that make sense. """ config = Config() config.purge = PurgeConfig() config.purge.purgeDirs = [ PurgeDir("/whatever", 4), PurgeDir("/etc/different", 12), ] config._validatePurge() def testValidate_047(self): """ Test that we catch a duplicate extended action name. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("unique1", "b", "c", dependencies=ActionDependencies()), ExtendedAction("unique2", "f", "g", dependencies=ActionDependencies()), ] config._validateExtensions() config.extensions.actions = [ ExtendedAction("duplicate", "b", "c", dependencies=ActionDependencies()), ExtendedAction("duplicate", "f", "g", dependencies=ActionDependencies()), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_048(self): """ Test that we catch a duplicate local peer name in stage configuration. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), LocalPeer(name="unique2", collectDir="/nowhere"), ] config._validateStage() config.stage.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), LocalPeer(name="duplicate", collectDir="/nowhere"), ] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_049(self): """ Test that we catch a duplicate remote peer name in stage configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.remotePeers = [ RemotePeer(name="unique1", collectDir="/some/path/to/data"), RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validateStage() config.stage.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_050(self): """ Test that we catch a duplicate peer name duplicated between remote and local in stage configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.stage = StageConfig() config.stage.targetDir = "/whatever" config.stage.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.stage.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validateStage() config.stage.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), ] config.stage.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validateStage) def testValidate_051(self): """ Test validate on a None peers section. 
""" config = Config() config.peers = None config._validatePeers() def testValidate_052(self): """ Test validate on an empty peers section. """ config = Config() config.peers = PeersConfig() self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_053(self): """ Test validate on peers section containing None for the lists. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = None config.peers.remotePeers = None self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_054(self): """ Test validate on peers section containing [] for the lists. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_055(self): """ Test validate on peers section containing one local peer that is empty. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(), ] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_056(self): """ Test validate on peers section containing local peer with only a name. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="name"), ] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_057(self): """ Test validate on peers section containing one local peer with a name and path, None for remote list. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.peers.remotePeers = None config._validatePeers() def testValidate_058(self): """ Test validate on peers section containing one local peer with a name and path, [] for remote list. 
""" config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ] config.peers.remotePeers = [] config._validatePeers() def testValidate_059(self): """ Test validate on peers section containing one remote peer that is empty. """ config = Config() config.peers = PeersConfig() config.peers.remotePeers = [RemotePeer(), ] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_060(self): """ Test validate on peers section containing one remote peer with only a name. """ config = Config() config.peers = PeersConfig() config.peers.remotePeers = [RemotePeer(name="blech"), ] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_061(self): """ Test validate on peers section containing one remote peer with a name and path, None for local list. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = None config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" config._validatePeers() def testValidate_062(self): """ Test validate on peers section containing one remote peer with a name and path, [] for local list. 
""" config = Config() config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" config._validatePeers() def testValidate_063(self): """ Test validate on peers section containing one remote and one local peer. """ config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), ] config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" config._validatePeers() def testValidate_064(self): """ Test validate on peers section containing multiple remote and local peers. 
""" config = Config() config.peers = PeersConfig() config.peers.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), LocalPeer("one", "/two"), LocalPeer("a", "/b"), ] config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), RemotePeer("c", "/d"), ] self.failUnlessRaises(ValueError, config._validatePeers) config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config._validatePeers() config.options = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[-1].remoteUser = "remote" config.peers.remotePeers[-1].rcpCommand = "command" self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].remoteUser = "remote" config.peers.remotePeers[0].rcpCommand = "command" config._validatePeers() def testValidate_065(self): """ Test that we catch a duplicate local peer name in peers configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), LocalPeer(name="unique2", collectDir="/nowhere"), ] config._validatePeers() config.peers.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), LocalPeer(name="duplicate", collectDir="/nowhere"), ] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_066(self): """ Test that we catch a duplicate remote peer name in peers configuration. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.peers.remotePeers = [ RemotePeer(name="unique1", collectDir="/some/path/to/data"), RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validatePeers() config.peers.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_067(self): """ Test that we catch a duplicate peer name duplicated between remote and local in peers configuration. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config._validatePeers() config.peers.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ] self.failUnlessRaises(ValueError, config._validatePeers) def testValidate_068(self): """ Test that stage peers can be None, if peers configuration is not None. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = None config.stage.remotePeers = None config._validatePeers() config._validateStage() def testValidate_069(self): """ Test that stage peers can be empty lists, if peers configuration is not None. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [] config._validatePeers() config._validateStage() def testValidate_070(self): """ Test that staging local peers must be valid if filled in, even if peers configuration is not None. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(), ] # empty local peer is invalid, so validation should catch it config.stage.remotePeers = [] config._validatePeers() self.failUnlessRaises(ValueError, config._validateStage) def testValidate_071(self): """ Test that staging remote peers must be valid if filled in, even if peers configuration is not None. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [] config.stage.remotePeers = [RemotePeer(), ] # empty remote peer is invalid, so validation should catch it config._validatePeers() self.failUnlessRaises(ValueError, config._validateStage) def testValidate_072(self): """ Test that staging local and remote peers must be valid if filled in, even if peers configuration is not None. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="command") config.peers = PeersConfig() config.stage = StageConfig() config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ] config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ] config.stage.targetDir = "/whatever" config.stage.localPeers = [LocalPeer(), ] # empty local peer is invalid, so validation should catch it config.stage.remotePeers = [RemotePeer(), ] # empty remote peer is invalid, so validation should catch it config._validatePeers() self.failUnlessRaises(ValueError, config._validateStage) def testValidate_073(self): """ Confirm that remote peer is required to have backup user if not set in options. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.backupUser = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].remoteUser = "ken" config._validatePeers() def testValidate_074(self): """ Confirm that remote peer is required to have rcp command if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.rcpCommand = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].rcpCommand = "rcp" config._validatePeers() def testValidate_075(self): """ Confirm that remote managed peer is required to have rsh command if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.rshCommand = None config._validatePeers() config.peers.remotePeers[0].managed = True self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].rshCommand = "rsh" config._validatePeers() def testValidate_076(self): """ Confirm that remote managed peer is required to have cback command if not set in options. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.cbackCommand = None config._validatePeers() config.peers.remotePeers[0].managed = True self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].cbackCommand = "cback" config._validatePeers() def testValidate_077(self): """ Confirm that remote managed peer is required to have managed actions list if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.managedActions = None config._validatePeers() config.peers.remotePeers[0].managed = True self.failUnlessRaises(ValueError, config._validatePeers) config.options.managedActions = [] self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].managedActions = ["collect", ] config._validatePeers() def testValidate_078(self): """ Test case where dereference is True but link depth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=True), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_079(self): """ Test case where dereference is True but link depth is zero. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=True), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_080(self): """ Test case where dereference is False and linkDepth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=False), ] config._validateCollect() def testValidate_081(self): """ Test case where dereference is None and linkDepth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=None), ] config._validateCollect() def testValidate_082(self): """ Test case where dereference is False and linkDepth is zero. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=False), ] config._validateCollect() def testValidate_083(self): """ Test case where dereference is None and linkDepth is zero. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=None), ] config._validateCollect() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document, validate=False. 
""" path = self.resources["cback.conf.2"] config = Config(xmlPath=path, validate=False) expected = Config() self.failUnlessEqual(expected, config) def testParse_002(self): """ Parse empty config document, validate=True. """ path = self.resources["cback.conf.2"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_003(self): """ Parse config document containing only a reference section, containing only required fields, validate=False. """ path = self.resources["cback.conf.3"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig() self.failUnlessEqual(expected, config) def testParse_004(self): """ Parse config document containing only a reference section, containing only required fields, validate=True. """ path = self.resources["cback.conf.3"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_005(self): """ Parse config document containing only a reference section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.4"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") self.failUnlessEqual(expected, config) def testParse_006(self): """ Parse config document containing only a reference section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.4"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_007(self): """ Parse config document containing only a extensions section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.16"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 1)) self.failUnlessEqual(expected, config) def testParse_008(self): """ Parse config document containing only a extensions section, containing only required fields, validate=True. """ path = self.resources["cback.conf.16"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_009(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "index", validate=False. """ path = self.resources["cback.conf.18"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 1)) self.failUnlessEqual(expected, config) def testParse_009a(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "dependency", validate=False. 
""" path = self.resources["cback.conf.19"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("sysinfo", "CedarBackup2.extend.sysinfo", "executeAction", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("mysql", "CedarBackup2.extend.mysql", "executeAction", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("postgresql", "CedarBackup2.extend.postgresql", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["one", ]))) expected.extensions.actions.append(ExtendedAction("subversion", "CedarBackup2.extend.subversion", "executeAction", index=None, dependencies=ActionDependencies(afterList=["one", ]))) expected.extensions.actions.append(ExtendedAction("mbox", "CedarBackup2.extend.mbox", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["one", ], afterList=["one", ]))) expected.extensions.actions.append(ExtendedAction("encrypt", "CedarBackup2.extend.encrypt", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", "d", ], afterList=["one", "two", "three", "four", "five", "six", "seven", "eight", ]))) self.failUnlessEqual(expected, config) def testParse_010(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "index", validate=True. """ path = self.resources["cback.conf.18"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_010a(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "dependency", validate=True. 
""" path = self.resources["cback.conf.19"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_011(self): """ Parse config document containing only an options section, containing only required fields, validate=False. """ path = self.resources["cback.conf.5"] config = Config(xmlPath=path, validate=False) expected = Config() expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B") self.failUnlessEqual(expected, config) def testParse_012(self): """ Parse config document containing only an options section, containing only required fields, validate=True. """ path = self.resources["cback.conf.5"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_013(self): """ Parse config document containing only an options section, containing required and optional fields, validate=False. """ path = self.resources["cback.conf.6"] config = Config(xmlPath=path, validate=False) expected = Config() expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] self.failUnlessEqual(expected, config) def testParse_014(self): """ Parse config document containing only an options section, containing required and optional fields, validate=True. """ path = self.resources["cback.conf.6"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_015(self): """ Parse config document containing only a collect section, containing only required fields, validate=False. (Case with single collect directory.) 
""" path = self.resources["cback.conf.7"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "tar", ".ignore") expected.collect.collectDirs = [CollectDir(absolutePath="/etc"), ] self.failUnlessEqual(expected, config) def testParse_015a(self): """ Parse config document containing only a collect section, containing only required fields, validate=False. (Case with single collect file.) """ path = self.resources["cback.conf.17"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "tar", ".ignore") expected.collect.collectFiles = [CollectFile(absolutePath="/etc"), ] self.failUnlessEqual(expected, config) def testParse_016(self): """ Parse config document containing only a collect section, containing only required fields, validate=True. (Case with single collect directory.) """ path = self.resources["cback.conf.7"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_016a(self): """ Parse config document containing only a collect section, containing only required fields, validate=True. (Case with single collect file.) """ path = self.resources["cback.conf.17"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_017(self): """ Parse config document containing only a collect section, containing required and optional fields, validate=False. 
""" path = self.resources["cback.conf.8"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", ".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root", recursionLevel=1)) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ ".*\.doc\.*", ".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) self.failUnlessEqual(expected, config) def testParse_018(self): """ Parse config document containing only a collect section, containing required and optional fields, validate=True. """ path = self.resources["cback.conf.8"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_019(self): """ Parse config document containing only a stage section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.9"] config = Config(xmlPath=path, validate=False) expected = Config() expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = None expected.stage.remotePeers = [ RemotePeer("machine2", "/opt/backup/collect"), ] self.failUnlessEqual(expected, config) def testParse_020(self): """ Parse config document containing only a stage section, containing only required fields, validate=True. """ path = self.resources["cback.conf.9"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_021(self): """ Parse config document containing only a stage section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.10"] config = Config(xmlPath=path, validate=False) expected = Config() expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) self.failUnlessEqual(expected, config) def testParse_022(self): """ Parse config document containing only a stage section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.10"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_023(self): """ Parse config document containing only a store section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.11"] config = Config(xmlPath=path, validate=False) expected = Config() expected.store = StoreConfig("/opt/backup/staging", mediaType="cdrw-74", devicePath="/dev/cdrw", deviceScsiId=None) self.failUnlessEqual(expected, config) def testParse_024(self): """ Parse config document containing only a store section, containing only required fields, validate=True. """ path = self.resources["cback.conf.11"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_025(self): """ Parse config document containing only a store section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.12"] config = Config(xmlPath=path, validate=False) expected = Config() expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = "0,0,0" expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.refreshMediaDelay = 12 expected.store.ejectDelay = 13 expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" self.failUnlessEqual(expected, config) def testParse_026(self): """ Parse config document containing only a store section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.12"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_027(self): """ Parse config document containing only a purge section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.13"] config = Config(xmlPath=path, validate=False) expected = Config() expected.purge = PurgeConfig() expected.purge.purgeDirs = [PurgeDir("/opt/backup/stage", 5), ] self.failUnlessEqual(expected, config) def testParse_028(self): """ Parse config document containing only a purge section, containing only required fields, validate=True. """ path = self.resources["cback.conf.13"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_029(self): """ Parse config document containing only a purge section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.14"] config = Config(xmlPath=path, validate=False) expected = Config() expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_030(self): """ Parse config document containing only a purge section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.14"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_031(self): """ Parse complete document containing all required and optional fields, "index" extensions, validate=False. 
""" path = self.resources["cback.conf.15"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 102)) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", 350)) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", ".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) 
expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ ".*\.doc\.*", ".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_031a(self): """ Parse complete document containing all required and optional fields, "dependency" extensions, validate=False. 
""" path = self.resources["cback.conf.20"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", ".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, 
dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ ".*\.doc\.*", ".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_032(self): """ Parse complete document containing all required and optional fields, "index" extensions, 
validate=True. """ path = self.resources["cback.conf.15"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 102)) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", 350)) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", ".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) 
expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ ".*\.doc\.*", ".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_032a(self): """ Parse complete document containing all required and optional fields, "dependency" extensions, validate=True. 
""" path = self.resources["cback.conf.20"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", ".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, 
dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ ".*\.doc\.*", ".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_033(self): """ Parse a sample from Cedar Backup v1.x, which must still be valid, validate=False. 
""" path = self.resources["cback.conf.1"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B") expected.collect = CollectConfig() expected.collect.targetDir = "/opt/backup/collect" expected.collect.archiveMode = "targz" expected.collect.ignoreFile = ".cbignore" expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir("/etc", collectMode="daily")) expected.collect.collectDirs.append(CollectDir("/var/log", collectMode="incr")) collectDir = CollectDir("/opt", collectMode="weekly") collectDir.absoluteExcludePaths = ["/opt/large", "/opt/backup", "/opt/tmp", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] expected.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = "0,0,0" expected.store.driveSpeed = 4 expected.store.mediaType = "cdrw-74" expected.store.checkData = True expected.store.checkMedia = False expected.store.warnMidnite = False expected.store.noEject = False expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) self.failUnlessEqual(expected, config) def testParse_034(self): """ Parse a sample from Cedar Backup v1.x, which must still be valid, validate=True. 
""" path = self.resources["cback.conf.1"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B") expected.collect = CollectConfig() expected.collect.targetDir = "/opt/backup/collect" expected.collect.archiveMode = "targz" expected.collect.ignoreFile = ".cbignore" expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir("/etc", collectMode="daily")) expected.collect.collectDirs.append(CollectDir("/var/log", collectMode="incr")) collectDir = CollectDir("/opt", collectMode="weekly") collectDir.absoluteExcludePaths = ["/opt/large", "/opt/backup", "/opt/tmp", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] expected.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = "0,0,0" expected.store.driveSpeed = 4 expected.store.mediaType = "cdrw-74" expected.store.checkData = True expected.store.checkMedia = False expected.store.warnMidnite = False expected.store.noEject = False expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) self.failUnlessEqual(expected, config) def testParse_035(self): """ Document containing all required fields, peers in peer configuration and not staging, validate=False. 
""" path = self.resources["cback.conf.21"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.peers = PeersConfig() expected.peers.localPeers = [] expected.peers.remotePeers = [] expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B", rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None)) expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ])) expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", 
".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", ".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ ".*\.doc\.*", ".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = None expected.stage.remotePeers = None expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = 
PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_036(self): """ Document containing all required fields, peers in peer configuration and not staging, validate=True. """ path = self.resources["cback.conf.21"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.peers = PeersConfig() expected.peers.localPeers = [] expected.peers.remotePeers = [] expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", 
remoteUser="someone", rcpCommand="scp -B")) expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B", rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None)) expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ])) expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", ".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ ".*\.doc\.*", ".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = None expected.stage.remotePeers = None expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" 
expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_037(self): """ Parse config document containing only a peers section, containing only required fields, validate=False. """ path = self.resources["cback.conf.22"] config = Config(xmlPath=path, validate=False) expected = Config() expected.peers = PeersConfig() expected.peers.localPeers = None expected.peers.remotePeers = [ RemotePeer("machine2", "/opt/backup/collect"), ] self.failUnlessEqual(expected, config) def testParse_038(self): """ Parse config document containing only a peers section, containing only required fields, validate=True. """ path = self.resources["cback.conf.9"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_039(self): """ Parse config document containing only a peers section, containing all required and optional fields, validate=False. 
""" path = self.resources["cback.conf.23"] config = Config(xmlPath=path, validate=False) expected = Config() expected.peers = PeersConfig() expected.peers.localPeers = [] expected.peers.remotePeers = [] expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B", rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None)) expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ])) self.failUnlessEqual(expected, config) def testParse_040(self): """ Parse config document containing only a peers section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.23"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) ######################### # Test the extract logic ######################### def testExtractXml_001(self): """ Extract empty config document, validate=True. """ before = Config() self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_002(self): """ Extract empty config document, validate=False. """ before = Config() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_003(self): """ Extract document containing only a valid reference section, validate=True. 
""" before = Config() before.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_004(self): """ Extract document containing only a valid reference section, validate=False. """ before = Config() before.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration") beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_005(self): """ Extract document containing only a valid extensions section, empty list, orderMode=None, validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = None before.extensions.actions = [] self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_006(self): """ Extract document containing only a valid extensions section, non-empty list and orderMode="index", validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = "index" before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", "function", 1)) self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_006a(self): """ Extract document containing only a valid extensions section, non-empty list and orderMode="dependency", validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = "dependency" before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", "function", dependencies=ActionDependencies(beforeList=["b", ], afterList=["a", ]))) self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_007(self): """ Extract document containing only a valid extensions section, empty list, orderMode=None, validate=False. 
""" before = Config() before.extensions = ExtensionsConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_008(self): """ Extract document containing only a valid extensions section, orderMode="index", validate=False. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.orderMode = "index" before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", "function", 1)) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_009(self): """ Extract document containing only an invalid extensions section, validate=True. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", None, None)) self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_010(self): """ Extract document containing only an invalid extensions section, validate=False. """ before = Config() before.extensions = ExtensionsConfig() before.extensions.actions = [] before.extensions.actions.append(ExtendedAction("name", "module", None, None)) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_011(self): """ Extract document containing only a valid options section, validate=True. 
""" before = Config() before.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh") before.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] before.options.hooks = [ PostActionHook("collect", "ls -l"), ] self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_012(self): """ Extract document containing only a valid options section, validate=False. """ before = Config() before.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh") before.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] before.options.hooks = [ PostActionHook("collect", "ls -l"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_013(self): """ Extract document containing only an invalid options section, validate=True. """ before = Config() before.options = OptionsConfig() self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_014(self): """ Extract document containing only an invalid options section, validate=False. """ before = Config() before.options = OptionsConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_015(self): """ Extract document containing only a valid collect section, empty lists, validate=True. (Test a directory.) 
""" before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_015a(self): """ Extract document containing only a valid collect section, empty lists, validate=True. (Test a file.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_016(self): """ Extract document containing only a valid collect section, empty lists, validate=False. (Test a directory.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_016a(self): """ Extract document containing only a valid collect section, empty lists, validate=False. (Test a file.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_017(self): """ Extract document containing only a valid collect section, non-empty lists, validate=True. (Test a directory.) 
""" before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_017a(self): """ Extract document containing only a valid collect section, non-empty lists, validate=True. (Test a file.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_018(self): """ Extract document containing only a valid collect section, non-empty lists, validate=False. (Test a directory.) """ before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_018a(self): """ Extract document containing only a valid collect section, non-empty lists, validate=False. (Test a file.) 
""" before = Config() before.collect = CollectConfig() before.collect.targetDir = "/opt/backup/collect" before.collect.archiveMode = "targz" before.collect.ignoreFile = ".cbignore" before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ] before.collect.excludePatterns = [ "pattern", ] before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_019(self): """ Extract document containing only an invalid collect section, validate=True. """ before = Config() before.collect = CollectConfig() self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_020(self): """ Extract document containing only an invalid collect section, validate=False. """ before = Config() before.collect = CollectConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_021(self): """ Extract document containing only a valid stage section, one empty list, validate=True. """ before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = None self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_022(self): """ Extract document containing only a valid stage section, empty lists, validate=False. 
""" before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = None beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_023(self): """ Extract document containing only a valid stage section, non-empty lists, validate=True. """ before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_024(self): """ Extract document containing only a valid stage section, non-empty lists, validate=False. """ before = Config() before.stage = StageConfig() before.stage.targetDir = "/opt/backup/staging" before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ] before.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ] beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_025(self): """ Extract document containing only an invalid stage section, validate=True. """ before = Config() before.stage = StageConfig() self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_026(self): """ Extract document containing only an invalid stage section, validate=False. """ before = Config() before.stage = StageConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_027(self): """ Extract document containing only a valid store section, validate=True. 
""" before = Config() before.store = StoreConfig() before.store.sourceDir = "/opt/backup/staging" before.store.devicePath = "/dev/cdrw" before.store.deviceScsiId = "0,0,0" before.store.driveSpeed = 4 before.store.mediaType = "cdrw-74" before.store.checkData = True before.store.checkMedia = True before.store.warnMidnite = True before.store.noEject = True before.store.refreshMediaDelay = 12 before.store.ejectDelay = 13 self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_028(self): """ Extract document containing only a valid store section, validate=False. """ before = Config() before.store = StoreConfig() before.store.sourceDir = "/opt/backup/staging" before.store.devicePath = "/dev/cdrw" before.store.deviceScsiId = "0,0,0" before.store.driveSpeed = 4 before.store.mediaType = "cdrw-74" before.store.checkData = True before.store.checkMedia = True before.store.warnMidnite = True before.store.noEject = True before.store.refreshMediaDelay = 12 before.store.ejectDelay = 13 beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_029(self): """ Extract document containing only an invalid store section, validate=True. """ before = Config() before.store = StoreConfig() self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_030(self): """ Extract document containing only an invalid store section, validate=False. """ before = Config() before.store = StoreConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_031(self): """ Extract document containing only a valid purge section, empty list, validate=True. 
""" before = Config() before.purge = PurgeConfig() self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_032(self): """ Extract document containing only a valid purge section, empty list, validate=False. """ before = Config() before.purge = PurgeConfig() beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_033(self): """ Extract document containing only a valid purge section, non-empty list, validate=True. """ before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever", retainDays=3)) self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_034(self): """ Extract document containing only a valid purge section, non-empty list, validate=False. """ before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever", retainDays=3)) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_035(self): """ Extract document containing only an invalid purge section, validate=True. """ before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever")) self.failUnlessRaises(ValueError, before.extractXml, validate=True) def testExtractXml_036(self): """ Extract document containing only an invalid purge section, validate=False. 
""" before = Config() before.purge = PurgeConfig() before.purge.purgeDirs = [] before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever")) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_037(self): """ Extract complete document containing all required and optional fields, "index" extensions, validate=False. """ path = self.resources["cback.conf.15"] before = Config(xmlPath=path, validate=False) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_037a(self): """ Extract complete document containing all required and optional fields, "dependency" extensions, validate=False. """ path = self.resources["cback.conf.20"] before = Config(xmlPath=path, validate=False) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_038(self): """ Extract complete document containing all required and optional fields, "index" extensions, validate=True. """ path = self.resources["cback.conf.15"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.failUnlessEqual(before, after) def testExtractXml_038a(self): """ Extract complete document containing all required and optional fields, "dependency" extensions, validate=True. """ path = self.resources["cback.conf.20"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.failUnlessEqual(before, after) def testExtractXml_039(self): """ Extract a sample from Cedar Backup v1.x, which must still be valid, validate=False. 
""" path = self.resources["cback.conf.1"] before = Config(xmlPath=path, validate=False) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_040(self): """ Extract a sample from Cedar Backup v1.x, which must still be valid, validate=True. """ path = self.resources["cback.conf.1"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.failUnlessEqual(before, after) def testExtractXml_041(self): """ Extract complete document containing all required and optional fields, using a peers configuration section, validate=True. """ path = self.resources["cback.conf.21"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.failUnlessEqual(before, after) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestByteQuantity, 'test'), unittest.makeSuite(TestActionDependencies, 'test'), unittest.makeSuite(TestActionHook, 'test'), unittest.makeSuite(TestPreActionHook, 'test'), unittest.makeSuite(TestPostActionHook, 'test'), unittest.makeSuite(TestBlankBehavior, 'test'), unittest.makeSuite(TestExtendedAction, 'test'), unittest.makeSuite(TestCommandOverride, 'test'), unittest.makeSuite(TestCollectFile, 'test'), unittest.makeSuite(TestCollectDir, 'test'), unittest.makeSuite(TestPurgeDir, 'test'), unittest.makeSuite(TestLocalPeer, 'test'), unittest.makeSuite(TestRemotePeer, 'test'), unittest.makeSuite(TestReferenceConfig, 'test'), unittest.makeSuite(TestExtensionsConfig, 'test'), unittest.makeSuite(TestOptionsConfig, 'test'), unittest.makeSuite(TestPeersConfig, 'test'), 
      unittest.makeSuite(TestCollectConfig, 'test'),
      unittest.makeSuite(TestStageConfig, 'test'),
      unittest.makeSuite(TestStoreConfig, 'test'),
      unittest.makeSuite(TestPurgeConfig, 'test'),
      unittest.makeSuite(TestConfig, 'test'),
   ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()

CedarBackup2-2.22.0/testcase/dvdwritertests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: dvdwritertests.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Tests DVD writer functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/writers/dvdwriter.py.

Code Coverage
=============

   This module contains individual tests for the public classes implemented
   in dvdwriter.py.

   Unfortunately, it's rather difficult to test this code in an automated
   fashion, even if you have access to a physical DVD writer drive.  It's
   even more difficult to test it if you are running on some build daemon
   (think of a Debian autobuilder) which can't be expected to have any
   hardware or any media that you could write to.  Because of this, there
   aren't any tests below that actually cause DVD media to be written to.

   As a compromise, complicated parts of the implementation are in terms of
   private static methods with well-defined behaviors.  Normally, I prefer
   to only test the public interface to a class, but in this case, testing
   these few private methods will help give us some reasonable confidence
   in the code, even if we can't write a physical disc or can't run all of
   the tests.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read:
   long) test names, as well.  Instead, I use lots of very small tests that
   each validate one specific thing.  These small tests are then named with
   an index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment in
   order to run successfully.
   This environment won't necessarily be available on every build system
   out there (for instance, on a Debian autobuilder).  Because of this, the
   default behavior is to run a "reduced feature set" test suite that has
   no surprising system, kernel or network requirements.  There are no
   special dependencies for these tests.

@author Kenneth J. Pronovici
"""


########################################################################
# Import modules and do runtime validations
########################################################################

import os
import unittest
import tempfile

from CedarBackup2.writers.dvdwriter import MediaDefinition, MediaCapacity, DvdWriter
from CedarBackup2.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW
from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar


#######################################################################
# Module-wide configuration and constants
#######################################################################

GB44 = (4.4*1024.0*1024.0*1024.0)    # 4.4 GB
GB44SECTORS = GB44/2048.0            # 4.4 GB in 2048-byte sectors

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "tree9.tar.gz", ]


#######################################################################
# Test Case Classes
#######################################################################

############################
# TestMediaDefinition class
############################

class TestMediaDefinition(unittest.TestCase):

   """Tests for the MediaDefinition class."""

   def testConstructor_001(self):
      """
      Test the constructor with an invalid media type.
      """
      self.failUnlessRaises(ValueError, MediaDefinition, 100)

   def testConstructor_002(self):
      """
      Test the constructor with the C{MEDIA_DVDPLUSR} media type.
""" media = MediaDefinition(MEDIA_DVDPLUSR) self.failUnlessEqual(MEDIA_DVDPLUSR, media.mediaType) self.failUnlessEqual(False, media.rewritable) self.failUnlessEqual(GB44SECTORS, media.capacity) def testConstructor_003(self): """ Test the constructor with the C{MEDIA_DVDPLUSRW} media type. """ media = MediaDefinition(MEDIA_DVDPLUSRW) self.failUnlessEqual(MEDIA_DVDPLUSRW, media.mediaType) self.failUnlessEqual(True, media.rewritable) self.failUnlessEqual(GB44SECTORS, media.capacity) ########################## # TestMediaCapacity class ########################## class TestMediaCapacity(unittest.TestCase): """Tests for the MediaCapacity class.""" def testConstructor_001(self): """ Test the constructor with valid, zero values """ capacity = MediaCapacity(0.0, 0.0) self.failUnlessEqual(0.0, capacity.bytesUsed) self.failUnlessEqual(0.0, capacity.bytesAvailable) def testConstructor_002(self): """ Test the constructor with valid, non-zero values. """ capacity = MediaCapacity(1.1, 2.2) self.failUnlessEqual(1.1, capacity.bytesUsed) self.failUnlessEqual(2.2, capacity.bytesAvailable) def testConstructor_003(self): """ Test the constructor with bytesUsed that is not a float. """ self.failUnlessRaises(ValueError, MediaCapacity, 0.0, "ken") def testConstructor_004(self): """ Test the constructor with bytesAvailable that is not a float. 
""" self.failUnlessRaises(ValueError, MediaCapacity, "a", 0.0) ###################### # TestDvdWriter class ###################### class TestDvdWriter(unittest.TestCase): """Tests for the DvdWriter class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): removedir(self.tmpdir) ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def getFileContents(self, resource): """Gets contents of named resource as a list of strings.""" path = self.resources[resource] return open(path).readlines() ################### # Test constructor ################### def testConstructor_001(self): """ Test with an empty device. """ self.failUnlessRaises(ValueError, DvdWriter, None) def testConstructor_002(self): """ Test with a device only. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_003(self): """ Test with a device and valid SCSI id. 
""" dvdwriter = DvdWriter("/dev/dvd", scsiId="ATA:1,0,0", unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("ATA:1,0,0", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_004(self): """ Test with a device and valid drive speed. """ dvdwriter = DvdWriter("/dev/dvd", driveSpeed=3, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(3, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_005(self): """ Test with a device with media type MEDIA_DVDPLUSR. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_006(self): """ Test with a device with media type MEDIA_DVDPLUSRW. 
""" dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSRW, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_007(self): """ Test with a device and invalid SCSI id. """ dvdwriter = DvdWriter("/dev/dvd", scsiId="00000000", unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("00000000", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_008(self): """ Test with a device and invalid drive speed. """ self.failUnlessRaises(ValueError, DvdWriter, "/dev/dvd", driveSpeed="KEN", unittest=True) def testConstructor_009(self): """ Test with a device and invalid media type. """ self.failUnlessRaises(ValueError, DvdWriter, "/dev/dvd", mediaType=999, unittest=True) def testConstructor_010(self): """ Test with all valid parameters, but no device, unittest=True. """ self.failUnlessRaises(ValueError, DvdWriter, None, "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=True) def testConstructor_011(self): """ Test with all valid parameters, but no device, unittest=False. """ self.failUnlessRaises(ValueError, DvdWriter, None, "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_012(self): """ Test with all valid parameters, and an invalid device (not absolute path), unittest=True.
""" self.failUnlessRaises(ValueError, DvdWriter, "dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=True) def testConstructor_013(self): """ Test with all valid parameters, and an invalid device (not absolute path), unittest=False. """ self.failUnlessRaises(ValueError, DvdWriter, "dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_014(self): """ Test with all valid parameters, and an invalid device (path does not exist), unittest=False. """ self.failUnlessRaises(ValueError, DvdWriter, "/dev/bogus", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_015(self): """ Test with all valid parameters. """ dvdwriter = DvdWriter("/dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSR, noEject=False, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("ATA:1,0,0", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(1, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_016(self): """ Test with all valid parameters. """ dvdwriter = DvdWriter("/dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSR, noEject=True, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("ATA:1,0,0", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(1, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.failUnlessEqual(False, dvdwriter.deviceHasTray) self.failUnlessEqual(False, dvdwriter.deviceCanEject) ###################### # Test isRewritable() ###################### def testIsRewritable_001(self): """ Test with MEDIA_DVDPLUSR. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.failUnlessEqual(False, dvdwriter.isRewritable()) def testIsRewritable_002(self): """ Test with MEDIA_DVDPLUSRW. 
""" dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSRW, unittest=True) self.failUnlessEqual(True, dvdwriter.isRewritable()) ######################### # Test initializeImage() ######################### def testInitializeImage_001(self): """ Test with newDisc=False, tmpdir=None. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.failUnlessEqual(False, dvdwriter._image.newDisc) self.failUnlessEqual(None, dvdwriter._image.tmpdir) self.failUnlessEqual({}, dvdwriter._image.entries) def testInitializeImage_002(self): """ Test with newDisc=True, tmpdir not None. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(True, "/path/to/somewhere") self.failUnlessEqual(True, dvdwriter._image.newDisc) self.failUnlessEqual("/path/to/somewhere", dvdwriter._image.tmpdir) self.failUnlessEqual({}, dvdwriter._image.entries) ####################### # Test addImageEntry() ####################### def testAddImageEntry_001(self): """ Add a valid path with no graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_002(self): """ Add a valid path with a graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_003(self): """ Add a non-existent path with no graft point, before calling initializeImage(). 
""" self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_004(self): """ Add a non-existent path with a graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_005(self): """ Add a valid path with no graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, None) self.failUnlessEqual({ path:None, }, dvdwriter._image.entries) def testAddImageEntry_006(self): """ Add a valid path with a graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) def testAddImageEntry_007(self): """ Add a non-existent path with no graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_008(self): """ Add a non-existent path with a graft point, after calling initializeImage(). 
""" self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_009(self): """ Add the same path several times. """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) def testAddImageEntry_010(self): """ Add several paths. """ self.extractTar("tree9") path1 = self.buildPath([ "tree9", "dir001", ]) path2 = self.buildPath([ "tree9", "dir002", ]) path3 = self.buildPath([ "tree9", "dir001", "dir001", ]) self.failUnless(os.path.exists(path1)) self.failUnless(os.path.exists(path2)) self.failUnless(os.path.exists(path3)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path1, None) self.failUnlessEqual({ path1:None, }, dvdwriter._image.entries) dvdwriter.addImageEntry(path2, "ken") self.failUnlessEqual({ path1:None, path2:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path3, "another") self.failUnlessEqual({ path1:None, path2:"ken", path3:"another", }, dvdwriter._image.entries) ############################ # Test _searchForOverburn() ############################ def testSearchForOverburn_001(self): """ Test with output=None. 
""" output = None DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_002(self): """ Test with output=[]. """ output = [] DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_003(self): """ Test with one-line output, not containing the pattern. """ output = [ "This line does not contain the pattern", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-( /dev/cdrom: blocks are free, to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-) /dev/cdrom: 89048 blocks are free, 2033746 to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-( /dev/cdrom: 894048blocks are free, 2033746to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_004(self): """ Test with one-line output(s), containing the pattern. """ output = [ ":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: XXXX blocks are free, XXXX to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: 1 blocks are free, 1 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/dvd: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/writer: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( bogus: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_005(self): """ Test 
with multi-line output, not containing the pattern. """ output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_006(self): """ Test with multi-line output, containing the pattern at the top.
""" output = [] output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_007(self): """ Test with multi-line output, containing the pattern at the bottom. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_008(self): """ Test with multi-line output, containing the pattern in the middle. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_009(self): """ Test with multi-line output, containing the pattern several times. 
""" output = [] output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Rock Ridge signatures found") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) ########################### 
# Test _parseSectorsUsed() ########################### def testParseSectorsUsed_001(self): """ Test with output=None. """ output = None sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(0.0, sectorsUsed) def testParseSectorsUsed_002(self): """ Test with output=[]. """ output = [] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(0.0, sectorsUsed) def testParseSectorsUsed_003(self): """ Test with one-line output, not containing the pattern. """ output = [ "This line does not contain the pattern", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(0.0, sectorsUsed) def testParseSectorsUsed_004(self): """ Test with one-line output(s), containing the pattern. """ output = [ "'seek=10'", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(10.0*16.0, sectorsUsed) output = [ "' seek= 10 '", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(10.0*16.0, sectorsUsed) output = [ "Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(87566*16.0, sectorsUsed) def testParseSectorsUsed_005(self): """ Test with real growisofs output. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(87566*16.0, sectorsUsed) ######################### # Test _buildWriteArgs() ######################### def testBuildWriteArgs_001(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=None, mediaLabel=None,dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = None mediaLabel = None dryRun = False self.failUnlessRaises(ValueError, DvdWriter._buildWriteArgs, newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) def testBuildWriteArgs_002(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=None, mediaLabel=None, dryRun=True. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = None mediaLabel = None dryRun = True self.failUnlessRaises(ValueError, DvdWriter._buildWriteArgs, newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) def testBuildWriteArgs_003(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_004(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_005(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_006(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_007(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=1, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 1 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=1", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_008(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=2, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = 2 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=2", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_009(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_010(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_011(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=False. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-M", "/dev/dvd", "-r", "-graft-points", "path1", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_012(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-M", "/dev/dvd", "-r", "-graft-points", "path1", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_013(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, "path2":"graft2", "path3":"/path/to/graft3", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-Z", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", "path/to/graft3/=path3", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_014(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=True. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, "path2":"graft2", "path3":"/path/to/graft3", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-Z", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", "path/to/graft3/=path3", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_015(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=1, imagePath=None, entries=, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 1 imagePath = None entries = { "path1":None, "path2":"graft2", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=1", "-M", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_016(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=2, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 2 imagePath = None entries = { "path1":None, "path2":"graft2", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=2", "-M", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_017(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath=None, entries=, mediaLabel=None, dryRun=False. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_018(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_019(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath="/path/to/image", entries=None, mediaLabel="BACKUP", dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = "/path/to/image" entries = None mediaLabel = "BACKUP" dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_020(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath=None, entries=, mediaLabel="BACKUP", dryRun=True. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = "BACKUP" dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd", "-V", "BACKUP", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMediaDefinition, 'test'), unittest.makeSuite(TestMediaCapacity, 'test'), unittest.makeSuite(TestDvdWriter, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/capacitytests.py0000664000175000017500000010015511415165677022631 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: capacitytests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests capacity extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/capacity.py. Code Coverage ============= This module contains individual tests for the public classes implemented in extend/capacity.py. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". 
Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a CAPACITYTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup2.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup2.testutil import hexFloatLiteralAllowed, findResources, failUnlessAssignRaises from CedarBackup2.xmlutil import createOutputDom, serializeDom from CedarBackup2.extend.capacity import LocalConfig, CapacityConfig, ByteQuantity, PercentageQuantity ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "capacity.conf.1", "capacity.conf.2", "capacity.conf.3", "capacity.conf.4", ] ####################################################################### # Test Case Classes ####################################################################### 
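The round-trip technique described in the module docstring above can be sketched in a few lines. This is a minimal standalone illustration using only xml.etree.ElementTree, not the real CedarBackup2 xmlutil/addConfig API; the Quantity class here is a hypothetical stand-in for PercentageQuantity.

```python
# Standalone sketch of round-trip XML validation: serialize an object into an
# XML document, parse the document back, and assert the objects are equal.
import xml.etree.ElementTree as ET

class Quantity(object):
   """Hypothetical stand-in for PercentageQuantity (not the real class)."""
   def __init__(self, quantity=None):
      self.quantity = quantity
   def __eq__(self, other):
      return self.quantity == other.quantity
   def addConfig(self, parent):
      # Serialize this object as a child node of the passed-in parent
      ET.SubElement(parent, "quantity").text = self.quantity

origConfig = Quantity("63.2")
root = ET.Element("capacity")
origConfig.addConfig(root)
xmlData = ET.tostring(root)                        # object -> XML document
parsed = ET.fromstring(xmlData)                    # XML document -> values
newConfig = Quantity(parsed.findtext("quantity"))  # values -> new object
assert origConfig == newConfig                     # round trip preserved the data
```

If the parse succeeds and the equality assertion holds, the serialization logic is assumed correct, without ever comparing against a brittle constant XML string.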
############################### # TestPercentageQuantity class ############################### class TestPercentageQuantity(unittest.TestCase): """Tests for the PercentageQuantity class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PercentageQuantity() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.percentage) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ quantity = PercentageQuantity("6") self.failUnlessEqual("6", quantity.quantity) self.failUnlessEqual(6.0, quantity.percentage) def testConstructor_003(self): """ Test assignment of quantity attribute, None value. """ quantity = PercentageQuantity(quantity="1.0") self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.percentage) quantity.quantity = None self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.percentage) def testConstructor_004(self): """ Test assignment of quantity attribute, valid values. 
""" quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.percentage) quantity.quantity = "1.0" self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.percentage) quantity.quantity = ".1" self.failUnlessEqual(".1", quantity.quantity) self.failUnlessEqual(0.1, quantity.percentage) quantity.quantity = "12" self.failUnlessEqual("12", quantity.quantity) self.failUnlessEqual(12.0, quantity.percentage) quantity.quantity = "0.5" self.failUnlessEqual("0.5", quantity.quantity) self.failUnlessEqual(0.5, quantity.percentage) quantity.quantity = "0.25E2" self.failUnlessEqual("0.25E2", quantity.quantity) self.failUnlessEqual(0.25e2, quantity.percentage) if hexFloatLiteralAllowed(): # Some interpreters allow this, some don't quantity.quantity = "0x0C" self.failUnlessEqual("0x0C", quantity.quantity) self.failUnlessEqual(12.0, quantity.percentage) def testConstructor_005(self): """ Test assignment of quantity attribute, invalid value (empty). """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "") self.failUnlessEqual(None, quantity.quantity) def testConstructor_006(self): """ Test assignment of quantity attribute, invalid value (not a floating point number). """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "blech") self.failUnlessEqual(None, quantity.quantity) def testConstructor_007(self): """ Test assignment of quantity attribute, invalid value (negative number). 
""" quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-3") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-6.8") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-0.2") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-.1") self.failUnlessEqual(None, quantity.quantity) def testConstructor_008(self): """ Test assignment of quantity attribute, invalid value (larger than 100%). """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "100.0001") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "101") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "1e6") self.failUnlessEqual(None, quantity.quantity) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ quantity1 = PercentageQuantity() quantity2 = PercentageQuantity() self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" quantity1 = PercentageQuantity("12") quantity2 = PercentageQuantity("12") self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_003(self): """ Test comparison of two differing objects, quantity differs (one None). """ quantity1 = PercentageQuantity() quantity2 = PercentageQuantity(quantity="12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004(self): """ Test comparison of two differing objects, quantity differs. """ quantity1 = PercentageQuantity("10") quantity2 = PercentageQuantity("12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) ########################## # TestCapacityConfig class ########################## class TestCapacityConfig(unittest.TestCase): """Tests for the CapacityConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = CapacityConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) self.failUnlessEqual(None, capacity.minBytes) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ capacity = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("2.0", UNIT_KBYTES)) self.failUnlessEqual(PercentageQuantity("63.2"), capacity.maxPercentage) self.failUnlessEqual(ByteQuantity("2.0", UNIT_KBYTES), capacity.minBytes) def testConstructor_003(self): """ Test assignment of maxPercentage attribute, None value. """ capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) self.failUnlessEqual(PercentageQuantity("63.2"), capacity.maxPercentage) capacity.maxPercentage = None self.failUnlessEqual(None, capacity.maxPercentage) def testConstructor_004(self): """ Test assignment of maxPercentage attribute, valid value. """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) capacity.maxPercentage = PercentageQuantity("63.2") self.failUnlessEqual(PercentageQuantity("63.2"), capacity.maxPercentage) def testConstructor_005(self): """ Test assignment of maxPercentage attribute, invalid value (empty). """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) self.failUnlessAssignRaises(ValueError, capacity, "maxPercentage", "") self.failUnlessEqual(None, capacity.maxPercentage) def testConstructor_006(self): """ Test assignment of maxPercentage attribute, invalid value (not a PercentageQuantity). 
""" capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) self.failUnlessAssignRaises(ValueError, capacity, "maxPercentage", "1.0 GB") self.failUnlessEqual(None, capacity.maxPercentage) def testConstructor_007(self): """ Test assignment of minBytes attribute, None value. """ capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_KBYTES)) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), capacity.minBytes) capacity.minBytes = None self.failUnlessEqual(None, capacity.minBytes) def testConstructor_008(self): """ Test assignment of minBytes attribute, valid value. """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.minBytes) capacity.minBytes = ByteQuantity("1.00", UNIT_KBYTES) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), capacity.minBytes) def testConstructor_009(self): """ Test assignment of minBytes attribute, invalid value (empty). """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.minBytes) self.failUnlessAssignRaises(ValueError, capacity, "minBytes", "") self.failUnlessEqual(None, capacity.minBytes) def testConstructor_010(self): """ Test assignment of minBytes attribute, invalid value (not a ByteQuantity). """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.minBytes) self.failUnlessAssignRaises(ValueError, capacity, "minBytes", 12) self.failUnlessEqual(None, capacity.minBytes) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" capacity1 = CapacityConfig() capacity2 = CapacityConfig() self.failUnlessEqual(capacity1, capacity2) self.failUnless(capacity1 == capacity2) self.failUnless(not capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(capacity1 >= capacity2) self.failUnless(not capacity1 != capacity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ capacity1 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.failUnlessEqual(capacity1, capacity2) self.failUnless(capacity1 == capacity2) self.failUnless(not capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(capacity1 >= capacity2) self.failUnless(not capacity1 != capacity2) def testComparison_003(self): """ Test comparison of two differing objects, maxPercentage differs (one None). """ capacity1 = CapacityConfig() capacity2 = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) self.failIfEqual(capacity1, capacity2) self.failUnless(not capacity1 == capacity2) self.failUnless(capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(not capacity1 >= capacity2) self.failUnless(capacity1 != capacity2) def testComparison_004(self): """ Test comparison of two differing objects, maxPercentage differs. 
""" capacity1 = CapacityConfig(PercentageQuantity("15.0"), ByteQuantity("1.00", UNIT_MBYTES)) capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(capacity1, capacity2) self.failUnless(not capacity1 == capacity2) self.failUnless(capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(not capacity1 >= capacity2) self.failUnless(capacity1 != capacity2) def testComparison_005(self): """ Test comparison of two differing objects, minBytes differs (one None). """ capacity1 = CapacityConfig() capacity2 = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(capacity1, capacity2) self.failUnless(not capacity1 == capacity2) self.failUnless(capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(not capacity1 >= capacity2) self.failUnless(capacity1 != capacity2) def testComparison_006(self): """ Test comparison of two differing objects, minBytes differs. 
""" capacity1 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("0.5", UNIT_MBYTES)) capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(capacity1, capacity2) self.failUnless(not capacity1 == capacity2) self.failUnless(capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(not capacity1 >= capacity2) self.failUnless(capacity1 != capacity2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the capacity configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.capacity) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.capacity) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["capacity.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of capacity attribute, None value. """ config = LocalConfig() config.capacity = None self.failUnlessEqual(None, config.capacity) def testConstructor_005(self): """ Test assignment of capacity attribute, valid value. """ config = LocalConfig() config.capacity = CapacityConfig() self.failUnlessEqual(CapacityConfig(), config.capacity) def testConstructor_006(self): """ Test assignment of capacity attribute, invalid value (not CapacityConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "capacity", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.capacity = CapacityConfig() config2 = LocalConfig() config2.capacity = CapacityConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, capacity differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.capacity = CapacityConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, capacity differs. """ config1 = LocalConfig() config1.capacity = CapacityConfig(minBytes=ByteQuantity("0.1", UNIT_MBYTES)) config2 = LocalConfig() config2.capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None capacity section. 
""" config = LocalConfig() config.capacity = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty capacity section. """ config = LocalConfig() config.capacity = CapacityConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty capacity section with no values filled in. """ config = LocalConfig() config.capacity = CapacityConfig(None, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty capacity section with both max percentage and min bytes filled in. """ config = LocalConfig() config.capacity = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty capacity section with only max percentage filled in. """ config = LocalConfig() config.capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) config.validate() def testValidate_006(self): """ Test validate on a non-empty capacity section with only min bytes filled in. """ config = LocalConfig() config.capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES)) config.validate() ############################ # Test parsing of documents ############################ # Some of the byte-size parsing logic is tested more fully in splittests.py. # I decided not to duplicate it here, since it's shared from config.py. def testParse_001(self): """ Parse empty config document. 
""" path = self.resources["capacity.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.capacity) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.capacity) def testParse_002(self): """ Parse config document that configures max percentage. """ path = self.resources["capacity.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.capacity) self.failUnlessEqual(PercentageQuantity("63.2"), config.capacity.maxPercentage) self.failUnlessEqual(None, config.capacity.minBytes) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.capacity) self.failUnlessEqual(PercentageQuantity("63.2"), config.capacity.maxPercentage) self.failUnlessEqual(None, config.capacity.minBytes) def testParse_003(self): """ Parse config document that configures min bytes, size in bytes. """ path = self.resources["capacity.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.capacity) self.failUnlessEqual(None, config.capacity.maxPercentage) self.failUnlessEqual(ByteQuantity("18", UNIT_BYTES), config.capacity.minBytes) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.capacity) self.failUnlessEqual(None, config.capacity.maxPercentage) self.failUnlessEqual(ByteQuantity("18", UNIT_BYTES), config.capacity.minBytes) def testParse_004(self): """ Parse config document with filled-in values, size in KB. 
""" path = self.resources["capacity.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.capacity) self.failUnlessEqual(None, config.capacity.maxPercentage) self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.capacity.minBytes) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.capacity) self.failUnlessEqual(None, config.capacity.maxPercentage) self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.capacity.minBytes) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ capacity = CapacityConfig() config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_002(self): """ Test with max percentage value set. """ capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.29128310980123")) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_003(self): """ Test with min bytes value set, byte values. """ capacity = CapacityConfig(minBytes=ByteQuantity("121231", UNIT_BYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_004(self): """ Test with min bytes value set, KB values. """ capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_KBYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_005(self): """ Test with min bytes value set, MB values. """ capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_MBYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) def testAddConfig_006(self): """ Test with min bytes value set, GB values. 
""" capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_GBYTES)) config = LocalConfig() config.capacity = capacity self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestPercentageQuantity, 'test'), unittest.makeSuite(TestCapacityConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/actionsutiltests.py0000664000175000017500000002345511415165677023401 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: actionsutiltests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests action utility functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/actions/util.py. Code Coverage ============= This module contains individual tests for the public functions and classes implemented in actions/util.py. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use an ACTIONSUTILTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import os import unittest import tempfile from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile from CedarBackup2.extend.encrypt import ENCRYPT_INDICATOR ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree1.tar.gz", "tree8.tar.gz", "tree15.tar.gz", "tree17.tar.gz", "tree18.tar.gz", "tree19.tar.gz", "tree20.tar.gz", ] INVALID_PATH = "bogus" # This path name should never exist ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) ####################### # Test findDailyDirs() ####################### def testFindDailyDirs_001(self): """ Test with a nonexistent staging directory. 
""" stagingDir = self.buildPath([INVALID_PATH]) self.failUnlessRaises(ValueError, findDailyDirs, stagingDir, ENCRYPT_INDICATOR) def testFindDailyDirs_002(self): """ Test with an empty staging directory. """ self.extractTar("tree8") stagingDir = self.buildPath(["tree8", "dir001", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.failUnlessEqual([], dailyDirs) def testFindDailyDirs_003(self): """ Test with a staging directory containing only files. """ self.extractTar("tree1") stagingDir = self.buildPath(["tree1", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.failUnlessEqual([], dailyDirs) def testFindDailyDirs_004(self): """ Test with a staging directory containing only links. """ self.extractTar("tree15") stagingDir = self.buildPath(["tree15", "dir001", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.failUnlessEqual([], dailyDirs) def testFindDailyDirs_005(self): """ Test with a valid staging directory, where the daily directories do NOT contain the encrypt indicator. """ self.extractTar("tree17") stagingDir = self.buildPath(["tree17" ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.failUnlessEqual(6, len(dailyDirs)) self.failUnless(self.buildPath([ "tree17", "2006", "12", "29", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree17", "2006", "12", "30", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree17", "2006", "12", "31", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree17", "2007", "01", "01", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree17", "2007", "01", "02", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree17", "2007", "01", "03", ]) in dailyDirs) def testFindDailyDirs_006(self): """ Test with a valid staging directory, where the daily directories DO contain the encrypt indicator. 
""" self.extractTar("tree18") stagingDir = self.buildPath(["tree18" ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.failUnlessEqual([], dailyDirs) def testFindDailyDirs_007(self): """ Test with a valid staging directory, where some daily directories contain the encrypt indicator and others do not. """ self.extractTar("tree19") stagingDir = self.buildPath(["tree19" ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.failUnlessEqual(3, len(dailyDirs)) self.failUnless(self.buildPath([ "tree19", "2006", "12", "30", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree19", "2007", "01", "01", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree19", "2007", "01", "03", ]) in dailyDirs) def testFindDailyDirs_008(self): """ Test for case where directories other than daily directories contain the encrypt indicator (the indicator should be ignored). """ self.extractTar("tree20") stagingDir = self.buildPath(["tree20", ]) dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR) self.failUnlessEqual(6, len(dailyDirs)) self.failUnless(self.buildPath([ "tree20", "2006", "12", "29", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree20", "2006", "12", "30", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree20", "2006", "12", "31", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree20", "2007", "01", "01", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree20", "2007", "01", "02", ]) in dailyDirs) self.failUnless(self.buildPath([ "tree20", "2007", "01", "03", ]) in dailyDirs) ############################ # Test writeIndicatorFile() ############################ def testWriteIndicatorFile_001(self): """ Test with a nonexistent staging directory. """ stagingDir = self.buildPath([INVALID_PATH]) self.failUnlessRaises(IOError, writeIndicatorFile, stagingDir, ENCRYPT_INDICATOR, None, None) def testWriteIndicatorFile_002(self): """ Test with a valid staging directory. 
""" self.extractTar("tree8") stagingDir = self.buildPath(["tree8", "dir001", ]) writeIndicatorFile(stagingDir, ENCRYPT_INDICATOR, None, None) self.failUnless(os.path.exists(self.buildPath(["tree8", "dir001", ENCRYPT_INDICATOR, ]))) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/__init__.py0000664000175000017500000000155511415155732021503 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ # Purpose : Provides package initialization. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Package initialization ######################################################################## """ This causes the test directory to be a package. 
""" __all__ = [ ] CedarBackup2-2.22.0/testcase/clitests.py0000664000175000017500000234104511415165677021612 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: clitests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests command-line interface functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/cli.py. Code Coverage ============= This module contains individual tests for the many of the public functions and classes implemented in cli.py. Where possible, we test functions that print output by passing a custom file descriptor. Sometimes, we only ensure that a function or method runs without failure, and we don't validate what its result is or what it prints out. 
Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need to use a CLITESTS_FULL
   environment variable to provide a "reduced feature set" test suite as for
   some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest
from os.path import isdir, isfile, islink, isabs, exists
from getopt import GetoptError

from CedarBackup2.testutil import failUnlessAssignRaises, captureOutput
from CedarBackup2.config import OptionsConfig, PeersConfig, ExtensionsConfig
from CedarBackup2.config import LocalPeer, RemotePeer
from CedarBackup2.config import ExtendedAction, ActionDependencies, PreActionHook, PostActionHook
from CedarBackup2.cli import _usage, _version, _diagnostics
from CedarBackup2.cli import Options
from CedarBackup2.cli import _ActionSet
from CedarBackup2.action import executeCollect, executeStage, executeStore, executePurge, executeRebuild, executeValidate


#######################################################################
# Test Case Classes
#######################################################################

######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ########################
   # Test simple functions
   ########################

   def testSimpleFuncs_001(self):
      """
      Test that the _usage() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_usage)

   def testSimpleFuncs_002(self):
      """
      Test that the _version() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_version)

   def testSimpleFuncs_003(self):
      """
      Test that the _diagnostics() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_diagnostics)


####################
# TestOptions class
####################

class TestOptions(unittest.TestCase):

   """Tests for the Options class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad
      variable names).
      """
      obj = Options()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no arguments.
""" options = Options() self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_002(self): """ Test constructor with validate=False, no other arguments. """ options = Options(validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_003(self): """ Test constructor with argumentList=[], validate=False. 
""" options = Options(argumentList=[], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_004(self): """ Test constructor with argumentString="", validate=False. """ options = Options(argumentString="", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_005(self): """ Test constructor with argumentList=["--help", ], validate=False. 
""" options = Options(argumentList=["--help", ], validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_006(self): """ Test constructor with argumentString="--help", validate=False. """ options = Options(argumentString="--help", validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_007(self): """ Test constructor with argumentList=["-h", ], validate=False. 
""" options = Options(argumentList=["-h", ], validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_008(self): """ Test constructor with argumentString="-h", validate=False. """ options = Options(argumentString="-h", validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_009(self): """ Test constructor with argumentList=["--version", ], validate=False. 
""" options = Options(argumentList=["--version", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_010(self): """ Test constructor with argumentString="--version", validate=False. """ options = Options(argumentString="--version", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_011(self): """ Test constructor with argumentList=["-V", ], validate=False. 
""" options = Options(argumentList=["-V", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_012(self): """ Test constructor with argumentString="-V", validate=False. """ options = Options(argumentString="-V", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_013(self): """ Test constructor with argumentList=["--verbose", ], validate=False. 
""" options = Options(argumentList=["--verbose", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_014(self): """ Test constructor with argumentString="--verbose", validate=False. """ options = Options(argumentString="--verbose", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_015(self): """ Test constructor with argumentList=["-b", ], validate=False. 
""" options = Options(argumentList=["-b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_016(self): """ Test constructor with argumentString="-b", validate=False. """ options = Options(argumentString="-b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_017(self): """ Test constructor with argumentList=["--quiet", ], validate=False. 
""" options = Options(argumentList=["--quiet", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_018(self): """ Test constructor with argumentString="--quiet", validate=False. """ options = Options(argumentString="--quiet", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_019(self): """ Test constructor with argumentList=["-q", ], validate=False. 
""" options = Options(argumentList=["-q", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_020(self): """ Test constructor with argumentString="-q", validate=False. """ options = Options(argumentString="-q", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_021(self): """ Test constructor with argumentList=["--config", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--config", ], validate=False) def testConstructor_022(self): """ Test constructor with argumentString="--config", validate=False. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="--config", validate=False) def testConstructor_023(self): """ Test constructor with argumentList=["-c", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-c", ], validate=False) def testConstructor_024(self): """ Test constructor with argumentString="-c", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-c", validate=False) def testConstructor_025(self): """ Test constructor with argumentList=["--config", "something", ], validate=False. """ options = Options(argumentList=["--config", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_026(self): """ Test constructor with argumentString="--config something", validate=False. 
""" options = Options(argumentString="--config something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_027(self): """ Test constructor with argumentList=["-c", "something", ], validate=False. """ options = Options(argumentList=["-c", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_028(self): """ Test constructor with argumentString="-c something", validate=False. 
""" options = Options(argumentString="-c something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_029(self): """ Test constructor with argumentList=["--full", ], validate=False. """ options = Options(argumentList=["--full", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_030(self): """ Test constructor with argumentString="--full", validate=False. 
""" options = Options(argumentString="--full", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_031(self): """ Test constructor with argumentList=["-f", ], validate=False. """ options = Options(argumentList=["-f", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_032(self): """ Test constructor with argumentString="-f", validate=False. 
""" options = Options(argumentString="-f", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_033(self): """ Test constructor with argumentList=["--logfile", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--logfile", ], validate=False) def testConstructor_034(self): """ Test constructor with argumentString="--logfile", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="--logfile", validate=False) def testConstructor_035(self): """ Test constructor with argumentList=["-l", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-l", ], validate=False) def testConstructor_036(self): """ Test constructor with argumentString="-l", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-l", validate=False) def testConstructor_037(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=False. 
""" options = Options(argumentList=["--logfile", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_038(self): """ Test constructor with argumentString="--logfile something", validate=False. """ options = Options(argumentString="--logfile something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_039(self): """ Test constructor with argumentList=["-l", "something", ], validate=False. 
""" options = Options(argumentList=["-l", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_040(self): """ Test constructor with argumentString="-l something", validate=False. """ options = Options(argumentString="-l something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_041(self): """ Test constructor with argumentList=["--owner", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--owner", ], validate=False) def testConstructor_042(self): """ Test constructor with argumentString="--owner", validate=False. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="--owner", validate=False) def testConstructor_043(self): """ Test constructor with argumentList=["-o", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-o", ], validate=False) def testConstructor_044(self): """ Test constructor with argumentString="-o", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-o", validate=False) def testConstructor_045(self): """ Test constructor with argumentList=["--owner", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=False) def testConstructor_046(self): """ Test constructor with argumentString="--owner something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="--owner something", validate=False) def testConstructor_047(self): """ Test constructor with argumentList=["-o", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["-o", "something", ], validate=False) def testConstructor_048(self): """ Test constructor with argumentString="-o something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="-o something", validate=False) def testConstructor_049(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=False. 
""" options = Options(argumentList=["--owner", "a:b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_050(self): """ Test constructor with argumentString="--owner a:b", validate=False. """ options = Options(argumentString="--owner a:b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_051(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=False. 
""" options = Options(argumentList=["-o", "a:b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_052(self): """ Test constructor with argumentString="-o a:b", validate=False. """ options = Options(argumentString="-o a:b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_053(self): """ Test constructor with argumentList=["--mode", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--mode", ], validate=False) def testConstructor_054(self): """ Test constructor with argumentString="--mode", validate=False. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="--mode", validate=False) def testConstructor_055(self): """ Test constructor with argumentList=["-m", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-m", ], validate=False) def testConstructor_056(self): """ Test constructor with argumentString="-m", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-m", validate=False) def testConstructor_057(self): """ Test constructor with argumentList=["--mode", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=False) def testConstructor_058(self): """ Test constructor with argumentString="--mode something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="--mode something", validate=False) def testConstructor_059(self): """ Test constructor with argumentList=["-m", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["-m", "something", ], validate=False) def testConstructor_060(self): """ Test constructor with argumentString="-m something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="-m something", validate=False) def testConstructor_061(self): """ Test constructor with argumentList=["--mode", "631", ], validate=False. 
""" options = Options(argumentList=["--mode", "631", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_062(self): """ Test constructor with argumentString="--mode 631", validate=False. """ options = Options(argumentString="--mode 631", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_063(self): """ Test constructor with argumentList=["-m", "631", ], validate=False. 
""" options = Options(argumentList=["-m", "631", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_064(self): """ Test constructor with argumentString="-m 631", validate=False. """ options = Options(argumentString="-m 631", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_065(self): """ Test constructor with argumentList=["--output", ], validate=False. 
""" options = Options(argumentList=["--output", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_066(self): """ Test constructor with argumentString="--output", validate=False. """ options = Options(argumentString="--output", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_067(self): """ Test constructor with argumentList=["-O", ], validate=False. 
""" options = Options(argumentList=["-O", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_068(self): """ Test constructor with argumentString="-O", validate=False. """ options = Options(argumentString="-O", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_069(self): """ Test constructor with argumentList=["--debug", ], validate=False. 
""" options = Options(argumentList=["--debug", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_070(self): """ Test constructor with argumentString="--debug", validate=False. """ options = Options(argumentString="--debug", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_071(self): """ Test constructor with argumentList=["-d", ], validate=False. 
""" options = Options(argumentList=["-d", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_072(self): """ Test constructor with argumentString="-d", validate=False. """ options = Options(argumentString="-d", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_073(self): """ Test constructor with argumentList=["--stack", ], validate=False. 
""" options = Options(argumentList=["--stack", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_074(self): """ Test constructor with argumentString="--stack", validate=False. """ options = Options(argumentString="--stack", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_075(self): """ Test constructor with argumentList=["-s", ], validate=False. 
""" options = Options(argumentList=["-s", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual([], options.actions) def testConstructor_076(self): """ Test constructor with argumentString="-s", validate=False. """ options = Options(argumentString="-s", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_077(self): """ Test constructor with argumentList=["all", ], validate=False. 
""" options = Options(argumentList=["all", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["all", ], options.actions) def testConstructor_078(self): """ Test constructor with argumentString="all", validate=False. """ options = Options(argumentString="all", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["all", ], options.actions) def testConstructor_079(self): """ Test constructor with argumentList=["collect", ], validate=False. 
""" options = Options(argumentList=["collect", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", ], options.actions) def testConstructor_080(self): """ Test constructor with argumentString="collect", validate=False. """ options = Options(argumentString="collect", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", ], options.actions) def testConstructor_081(self): """ Test constructor with argumentList=["stage", ], validate=False. 
""" options = Options(argumentList=["stage", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["stage", ], options.actions) def testConstructor_082(self): """ Test constructor with argumentString="stage", validate=False. """ options = Options(argumentString="stage", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["stage", ], options.actions) def testConstructor_083(self): """ Test constructor with argumentList=["store", ], validate=False. 
""" options = Options(argumentList=["store", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["store", ], options.actions) def testConstructor_084(self): """ Test constructor with argumentString="store", validate=False. """ options = Options(argumentString="store", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["store", ], options.actions) def testConstructor_085(self): """ Test constructor with argumentList=["purge", ], validate=False. 
""" options = Options(argumentList=["purge", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["purge", ], options.actions) def testConstructor_086(self): """ Test constructor with argumentString="purge", validate=False. """ options = Options(argumentString="purge", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["purge", ], options.actions) def testConstructor_087(self): """ Test constructor with argumentList=["rebuild", ], validate=False. 
""" options = Options(argumentList=["rebuild", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["rebuild", ], options.actions) def testConstructor_088(self): """ Test constructor with argumentString="rebuild", validate=False. """ options = Options(argumentString="rebuild", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["rebuild", ], options.actions) def testConstructor_089(self): """ Test constructor with argumentList=["validate", ], validate=False. 
""" options = Options(argumentList=["validate", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["validate", ], options.actions) def testConstructor_090(self): """ Test constructor with argumentString="validate", validate=False. """ options = Options(argumentString="validate", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["validate", ], options.actions) def testConstructor_091(self): """ Test constructor with argumentList=["collect", "all", ], validate=False. 
""" options = Options(argumentList=["collect", "all", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "all", ], options.actions) def testConstructor_092(self): """ Test constructor with argumentString="collect all", validate=False. """ options = Options(argumentString="collect all", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "all", ], options.actions) def testConstructor_093(self): """ Test constructor with argumentList=["collect", "rebuild", ], validate=False. 
""" options = Options(argumentList=["collect", "rebuild", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "rebuild", ], options.actions) def testConstructor_094(self): """ Test constructor with argumentString="collect rebuild", validate=False. """ options = Options(argumentString="collect rebuild", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "rebuild", ], options.actions) def testConstructor_095(self): """ Test constructor with argumentList=["collect", "validate", ], validate=False. 
""" options = Options(argumentList=["collect", "validate", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "validate", ], options.actions) def testConstructor_096(self): """ Test constructor with argumentString="collect validate", validate=False. """ options = Options(argumentString="collect validate", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "validate", ], options.actions) def testConstructor_097(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=False. 
""" options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "stage", ], options.actions) def testConstructor_098(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 collect stage", validate=False. """ options = Options(argumentString="-d --verbose -O --mode 600 collect stage", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "stage", ], options.actions) def testConstructor_099(self): """ Test constructor with argumentList=[], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=[], validate=True) def testConstructor_100(self): """ Test constructor with argumentString="", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="", validate=True) def testConstructor_101(self): """ Test constructor with argumentList=["--help", ], validate=True. """ options = Options(argumentList=["--help", ], validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_102(self): """ Test constructor with argumentString="--help", validate=True. 
""" options = Options(argumentString="--help", validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_103(self): """ Test constructor with argumentList=["-h", ], validate=True. """ options = Options(argumentList=["-h", ], validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_104(self): """ Test constructor with argumentString="-h", validate=True. 
""" options = Options(argumentString="-h", validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_105(self): """ Test constructor with argumentList=["--version", ], validate=True. """ options = Options(argumentList=["--version", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_106(self): """ Test constructor with argumentString="--version", validate=True. 
""" options = Options(argumentString="--version", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_107(self): """ Test constructor with argumentList=["-V", ], validate=True. """ options = Options(argumentList=["-V", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_108(self): """ Test constructor with argumentString="-V", validate=True. 
""" options = Options(argumentString="-V", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_109(self): """ Test constructor with argumentList=["--verbose", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--verbose", ], validate=True) def testConstructor_110(self): """ Test constructor with argumentString="--verbose", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--verbose", validate=True) def testConstructor_111(self): """ Test constructor with argumentList=["-b", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-b", ], validate=True) def testConstructor_112(self): """ Test constructor with argumentString="-b", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-b", validate=True) def testConstructor_113(self): """ Test constructor with argumentList=["--quiet", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--quiet", ], validate=True) def testConstructor_114(self): """ Test constructor with argumentString="--quiet", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--quiet", validate=True) def testConstructor_115(self): """ Test constructor with argumentList=["-q", ], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=["-q", ], validate=True) def testConstructor_116(self): """ Test constructor with argumentString="-q", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-q", validate=True) def testConstructor_117(self): """ Test constructor with argumentList=["--config", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--config", ], validate=True) def testConstructor_118(self): """ Test constructor with argumentString="--config", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="--config", validate=True) def testConstructor_119(self): """ Test constructor with argumentList=["-c", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-c", ], validate=True) def testConstructor_120(self): """ Test constructor with argumentString="-c", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="-c", validate=True) def testConstructor_121(self): """ Test constructor with argumentList=["--config", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--config", "something", ], validate=True) def testConstructor_122(self): """ Test constructor with argumentString="--config something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--config something", validate=True) def testConstructor_123(self): """ Test constructor with argumentList=["-c", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-c", "something", ], validate=True) def testConstructor_124(self): """ Test constructor with argumentString="-c something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-c something", validate=True) def testConstructor_125(self): """ Test constructor with argumentList=["--full", ], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=["--full", ], validate=True) def testConstructor_126(self): """ Test constructor with argumentString="--full", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--full", validate=True) def testConstructor_127(self): """ Test constructor with argumentList=["-f", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-f", ], validate=True) def testConstructor_128(self): """ Test constructor with argumentString="-f", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-f", validate=True) def testConstructor_129(self): """ Test constructor with argumentList=["--logfile", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--logfile", ], validate=True) def testConstructor_130(self): """ Test constructor with argumentString="--logfile", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="--logfile", validate=True) def testConstructor_131(self): """ Test constructor with argumentList=["-l", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-l", ], validate=True) def testConstructor_132(self): """ Test constructor with argumentString="-l", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="-l", validate=True) def testConstructor_133(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--logfile", "something", ], validate=True) def testConstructor_134(self): """ Test constructor with argumentString="--logfile something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--logfile something", validate=True) def testConstructor_135(self): """ Test constructor with argumentList=["-l", "something", ], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=["-l", "something", ], validate=True) def testConstructor_136(self): """ Test constructor with argumentString="-l something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-l something", validate=True) def testConstructor_137(self): """ Test constructor with argumentList=["--owner", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--owner", ], validate=True) def testConstructor_138(self): """ Test constructor with argumentString="--owner", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="--owner", validate=True) def testConstructor_139(self): """ Test constructor with argumentList=["-o", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-o", ], validate=True) def testConstructor_140(self): """ Test constructor with argumentString="-o", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="-o", validate=True) def testConstructor_141(self): """ Test constructor with argumentList=["--owner", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=True) def testConstructor_142(self): """ Test constructor with argumentString="--owner something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--owner something", validate=True) def testConstructor_143(self): """ Test constructor with argumentList=["-o", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-o", "something", ], validate=True) def testConstructor_144(self): """ Test constructor with argumentString="-o something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-o something", validate=True) def testConstructor_145(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "a:b", ], validate=True) def testConstructor_146(self): """ Test constructor with argumentString="--owner a:b", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--owner a:b", validate=True) def testConstructor_147(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-o", "a:b", ], validate=True) def testConstructor_148(self): """ Test constructor with argumentString="-o a:b", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-o a:b", validate=True) def testConstructor_149(self): """ Test constructor with argumentList=["--mode", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--mode", ], validate=True) def testConstructor_150(self): """ Test constructor with argumentString="--mode", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="--mode", validate=True) def testConstructor_151(self): """ Test constructor with argumentList=["-m", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-m", ], validate=True) def testConstructor_152(self): """ Test constructor with argumentString="-m", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="-m", validate=True) def testConstructor_153(self): """ Test constructor with argumentList=["--mode", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=True) def testConstructor_154(self): """ Test constructor with argumentString="--mode something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--mode something", validate=True) def testConstructor_155(self): """ Test constructor with argumentList=["-m", "something", ], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=["-m", "something", ], validate=True) def testConstructor_156(self): """ Test constructor with argumentString="-m something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-m something", validate=True) def testConstructor_157(self): """ Test constructor with argumentList=["--mode", "631", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "631", ], validate=True) def testConstructor_158(self): """ Test constructor with argumentString="--mode 631", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--mode 631", validate=True) def testConstructor_159(self): """ Test constructor with argumentList=["-m", "631", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-m", "631", ], validate=True) def testConstructor_160(self): """ Test constructor with argumentString="-m 631", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-m 631", validate=True) def testConstructor_161(self): """ Test constructor with argumentList=["--output", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--output", ], validate=True) def testConstructor_162(self): """ Test constructor with argumentString="--output", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--output", validate=True) def testConstructor_163(self): """ Test constructor with argumentList=["-O", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-O", ], validate=True) def testConstructor_164(self): """ Test constructor with argumentString="-O", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-O", validate=True) def testConstructor_165(self): """ Test constructor with argumentList=["--debug", ], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=["--debug", ], validate=True) def testConstructor_166(self): """ Test constructor with argumentString="--debug", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--debug", validate=True) def testConstructor_167(self): """ Test constructor with argumentList=["-d", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-d", ], validate=True) def testConstructor_168(self): """ Test constructor with argumentString="-d", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-d", validate=True) def testConstructor_169(self): """ Test constructor with argumentList=["--stack", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--stack", ], validate=True) def testConstructor_170(self): """ Test constructor with argumentString="--stack", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--stack", validate=True) def testConstructor_171(self): """ Test constructor with argumentList=["-s", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-s", ], validate=True) def testConstructor_172(self): """ Test constructor with argumentString="-s", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-s", validate=True) def testConstructor_173(self): """ Test constructor with argumentList=["all", ], validate=True. 
""" options = Options(argumentList=["all", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["all", ], options.actions) def testConstructor_174(self): """ Test constructor with argumentString="all", validate=True. """ options = Options(argumentString="all", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["all", ], options.actions) def testConstructor_175(self): """ Test constructor with argumentList=["collect", ], validate=True. 
""" options = Options(argumentList=["collect", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", ], options.actions) def testConstructor_176(self): """ Test constructor with argumentString="collect", validate=True. """ options = Options(argumentString="collect", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", ], options.actions) def testConstructor_177(self): """ Test constructor with argumentList=["stage", ], validate=True. 
""" options = Options(argumentList=["stage", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["stage", ], options.actions) def testConstructor_178(self): """ Test constructor with argumentString="stage", validate=True. """ options = Options(argumentString="stage", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["stage", ], options.actions) def testConstructor_179(self): """ Test constructor with argumentList=["store", ], validate=True. 
""" options = Options(argumentList=["store", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["store", ], options.actions) def testConstructor_180(self): """ Test constructor with argumentString="store", validate=True. """ options = Options(argumentString="store", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["store", ], options.actions) def testConstructor_181(self): """ Test constructor with argumentList=["purge", ], validate=True. 
""" options = Options(argumentList=["purge", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["purge", ], options.actions) def testConstructor_182(self): """ Test constructor with argumentString="purge", validate=True. """ options = Options(argumentString="purge", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["purge", ], options.actions) def testConstructor_183(self): """ Test constructor with argumentList=["rebuild", ], validate=True. 
""" options = Options(argumentList=["rebuild", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["rebuild", ], options.actions) def testConstructor_184(self): """ Test constructor with argumentString="rebuild", validate=True. """ options = Options(argumentString="rebuild", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["rebuild", ], options.actions) def testConstructor_185(self): """ Test constructor with argumentList=["validate", ], validate=True. 
""" options = Options(argumentList=["validate", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["validate", ], options.actions) def testConstructor_186(self): """ Test constructor with argumentString="validate", validate=True. """ options = Options(argumentString="validate", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["validate", ], options.actions) def testConstructor_187(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=True. 
""" options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "stage", ], options.actions) def testConstructor_188(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 collect stage", validate=True. """ options = Options(argumentString="-d --verbose -O --mode 600 collect stage", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "stage", ], options.actions) def testConstructor_189(self): """ Test constructor with argumentList=["--managed", ], validate=False. 
""" options = Options(argumentList=["--managed", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_190(self): """ Test constructor with argumentString="--managed", validate=False. """ options = Options(argumentString="--managed", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_191(self): """ Test constructor with argumentList=["-M", ], validate=False. 
""" options = Options(argumentList=["-M", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_192(self): """ Test constructor with argumentString="-M", validate=False. """ options = Options(argumentString="-M", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_193(self): """ Test constructor with argumentList=["--managed-only", ], validate=False. 
""" options = Options(argumentList=["--managed-only", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_194(self): """ Test constructor with argumentString="--managed-only", validate=False. """ options = Options(argumentString="--managed-only", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_195(self): """ Test constructor with argumentList=["-N", ], validate=False. 
""" options = Options(argumentList=["-N", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_196(self): """ Test constructor with argumentString="-N", validate=False. """ options = Options(argumentString="-N", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_197(self): """ Test constructor with argumentList=["--managed", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--managed", ], validate=True) def testConstructor_198(self): """ Test constructor with argumentString="--managed", validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--managed", validate=True) def testConstructor_199(self): """ Test constructor with argumentList=["-M", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-M", ], validate=True) def testConstructor_200(self): """ Test constructor with argumentString="-M", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-M", validate=True) def testConstructor_201(self): """ Test constructor with argumentList=["--managed-only", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--managed-only", ], validate=True) def testConstructor_202(self): """ Test constructor with argumentString="--managed-only", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--managed-only", validate=True) def testConstructor_203(self): """ Test constructor with argumentList=["-N", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-N", ], validate=True) def testConstructor_204(self): """ Test constructor with argumentString="-N", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-N", validate=True) def testConstructor_205(self): """ Test constructor with argumentList=["--diagnostics", ], validate=False. 
""" options = Options(argumentList=["--diagnostics", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_206(self): """ Test constructor with argumentString="--diagnostics", validate=False. """ options = Options(argumentString="--diagnostics", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_207(self): """ Test constructor with argumentList=["-D", ], validate=False. 
""" options = Options(argumentList=["-D", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_208(self): """ Test constructor with argumentString="-D", validate=False. """ options = Options(argumentString="-D", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_209(self): """ Test constructor with argumentList=["--diagnostics", ], validate=True. 
""" options = Options(argumentList=["--diagnostics", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_210(self): """ Test constructor with argumentString="--diagnostics", validate=True. """ options = Options(argumentString="--diagnostics", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_211(self): """ Test constructor with argumentList=["-D", ], validate=True. 
""" options = Options(argumentList=["-D", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_212(self): """ Test constructor with argumentString="-D", validate=True. """ options = Options(argumentString="-D", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes at defaults. 
""" options1 = Options() options2 = Options() self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes filled in and same. """ options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes filled in, help different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = False options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes filled in, version different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = False options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_005(self): """ Test comparison of two identical objects, all attributes filled in, verbose different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = False options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_006(self): """ Test comparison of two identical objects, all attributes filled in, quiet different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = False options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_007(self): """ Test comparison of two identical objects, all attributes filled in, config different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "whatever" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_008(self): """ Test comparison of two identical objects, all attributes filled in, full different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = False options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_009(self): """ Test comparison of two identical objects, all attributes filled in, logfile different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "stuff" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_010(self): """ Test comparison of two identical objects, all attributes filled in, owner different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("c", "d") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_011(self): """ Test comparison of two identical objects, all attributes filled in, mode different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = 0600 options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_012(self): """ Test comparison of two identical objects, all attributes filled in, output different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = False options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_013(self): """ Test comparison of two identical objects, all attributes filled in, debug different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = False options1.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_014(self): """ Test comparison of two identical objects, all attributes filled in, stacktrace different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_015(self): """ Test comparison of two identical objects, all attributes filled in, managed different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = False options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_016(self): """ Test comparison of two identical objects, all attributes filled in, managedOnly different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = False options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_017(self): """ Test comparison of two identical objects, all attributes filled in, diagnostics different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = 0631 options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = True options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) ########################### # Test buildArgumentList() ########################### def testBuildArgumentList_001(self): """Test with no values set, validate=False.""" options = Options() argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual([], argumentList) def testBuildArgumentList_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--help", ], argumentList) def testBuildArgumentList_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--version", ], argumentList) def testBuildArgumentList_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True 
argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--verbose", ], argumentList) def testBuildArgumentList_005(self): """Test with quiet set, validate=False.""" options = Options() options.quiet = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--quiet", ], argumentList) def testBuildArgumentList_006(self): """Test with config set, validate=False.""" options = Options() options.config = "stuff" argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--config", "stuff", ], argumentList) def testBuildArgumentList_007(self): """Test with full set, validate=False.""" options = Options() options.full = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--full", ], argumentList) def testBuildArgumentList_008(self): """Test with logfile set, validate=False.""" options = Options() options.logfile = "bogus" argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--logfile", "bogus", ], argumentList) def testBuildArgumentList_009(self): """Test with owner set, validate=False.""" options = Options() options.owner = ("ken", "group") argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--owner", "ken:group", ], argumentList) def testBuildArgumentList_010(self): """Test with mode set, validate=False.""" options = Options() options.mode = 0644 argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--mode", "644", ], argumentList) def testBuildArgumentList_011(self): """Test with output set, validate=False.""" options = Options() options.output = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--output", ], argumentList) def testBuildArgumentList_012(self): """Test with debug set, validate=False.""" options = Options() options.debug = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--debug", ], argumentList) def 
testBuildArgumentList_013(self): """Test with stacktrace set, validate=False.""" options = Options() options.stacktrace = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--stack", ], argumentList) def testBuildArgumentList_014(self): """Test with actions containing one item, validate=False.""" options = Options() options.actions = [ "collect", ] argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["collect", ], argumentList) def testBuildArgumentList_015(self): """Test with actions containing multiple items, validate=False.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["collect", "stage", "store", "purge", ], argumentList) def testBuildArgumentList_016(self): """Test with all values set, actions containing one item, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--managed", "--managed-only", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", ], argumentList) def testBuildArgumentList_017(self): """Test with all values set, actions containing multiple items, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = 
"logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--managed", "--managed-only", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", "stage", ], argumentList) def testBuildArgumentList_018(self): """Test with no values set, validate=True.""" options = Options() self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_019(self): """Test with help set, validate=True.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--help", ], argumentList) def testBuildArgumentList_020(self): """Test with version set, validate=True.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--version", ], argumentList) def testBuildArgumentList_021(self): """Test with verbose set, validate=True.""" options = Options() options.verbose = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_022(self): """Test with quiet set, validate=True.""" options = Options() options.quiet = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_023(self): """Test with config set, validate=True.""" options = Options() options.config = "stuff" self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_024(self): """Test with full set, validate=True.""" options = Options() options.full = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_025(self): 
"""Test with logfile set, validate=True.""" options = Options() options.logfile = "bogus" self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_026(self): """Test with owner set, validate=True.""" options = Options() options.owner = ("ken", "group") self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_027(self): """Test with mode set, validate=True.""" options = Options() options.mode = 0644 self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_028(self): """Test with output set, validate=True.""" options = Options() options.output = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_029(self): """Test with debug set, validate=True.""" options = Options() options.debug = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_030(self): """Test with stacktrace set, validate=True.""" options = Options() options.stacktrace = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_031(self): """Test with actions containing one item, validate=True.""" options = Options() options.actions = [ "collect", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["collect", ], argumentList) def testBuildArgumentList_032(self): """Test with actions containing multiple items, validate=True.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["collect", "stage", "store", "purge", ], argumentList) def testBuildArgumentList_033(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = 
"config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", ], argumentList) def testBuildArgumentList_034(self): """Test with all values set (except managed ones), actions containing multiple items, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", "stage", ], argumentList) def testBuildArgumentList_035(self): """Test with managed set, validate=False.""" options = Options() options.managed = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--managed", ], argumentList) def testBuildArgumentList_036(self): """Test with managed set, validate=True.""" options = Options() options.managed = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_037(self): """Test with managedOnly set, validate=False.""" options = Options() options.managedOnly = True argumentList = options.buildArgumentList(validate=False) 
self.failUnlessEqual(["--managed-only", ], argumentList) def testBuildArgumentList_038(self): """Test with managedOnly set, validate=True.""" options = Options() options.managedOnly = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_039(self): """Test with all values set, actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_040(self): """Test with all values set, actions containing multiple items, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_041(self): """Test with diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--diagnostics", ], argumentList) def testBuildArgumentList_042(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--diagnostics", ], argumentList) ############################# # Test 
buildArgumentString() ############################# def testBuildArgumentString_001(self): """Test with no values set, validate=False.""" options = Options() argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("", argumentString) def testBuildArgumentString_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--help ", argumentString) def testBuildArgumentString_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--version ", argumentString) def testBuildArgumentString_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--verbose ", argumentString) def testBuildArgumentString_005(self): """Test with quiet set, validate=False.""" options = Options() options.quiet = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--quiet ", argumentString) def testBuildArgumentString_006(self): """Test with config set, validate=False.""" options = Options() options.config = "stuff" argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--config "stuff" ', argumentString) def testBuildArgumentString_007(self): """Test with full set, validate=False.""" options = Options() options.full = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--full ", argumentString) def testBuildArgumentString_008(self): """Test with logfile set, validate=False.""" options = Options() options.logfile = "bogus" argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--logfile "bogus" ', argumentString) def testBuildArgumentString_009(self): """Test with owner set, 
validate=False.""" options = Options() options.owner = ("ken", "group") argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--owner "ken:group" ', argumentString) def testBuildArgumentString_010(self): """Test with mode set, validate=False.""" options = Options() options.mode = 0644 argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--mode 644 ', argumentString) def testBuildArgumentString_011(self): """Test with output set, validate=False.""" options = Options() options.output = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--output ", argumentString) def testBuildArgumentString_012(self): """Test with debug set, validate=False.""" options = Options() options.debug = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--debug ", argumentString) def testBuildArgumentString_013(self): """Test with stacktrace set, validate=False.""" options = Options() options.stacktrace = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--stack ", argumentString) def testBuildArgumentString_014(self): """Test with actions containing one item, validate=False.""" options = Options() options.actions = [ "collect", ] argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('"collect" ', argumentString) def testBuildArgumentString_015(self): """Test with actions containing multiple items, validate=False.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('"collect" "stage" "store" "purge" ', argumentString) def testBuildArgumentString_016(self): """Test with all values set, actions containing one item, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True 
        options.managed = True
        options.managedOnly = True
        options.logfile = "logfile"
        options.owner = ("a", "b")
        options.mode = "631"
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
        options.actions = ["collect", ]
        argumentString = options.buildArgumentString(validate=False)
        self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --managed --managed-only --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" ', argumentString)

    def testBuildArgumentString_017(self):
        """Test with all values set (except managed ones), actions containing multiple items, validate=False."""
        options = Options()
        options.help = True
        options.version = True
        options.verbose = True
        options.quiet = True
        options.config = "config"
        options.full = True
        options.logfile = "logfile"
        options.owner = ("a", "b")
        options.mode = "631"
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
        options.actions = ["collect", "stage", ]
        argumentString = options.buildArgumentString(validate=False)
        self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" "stage" ', argumentString)

    def testBuildArgumentString_018(self):
        """Test with no values set, validate=True."""
        options = Options()
        self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_019(self):
        """Test with help set, validate=True."""
        options = Options()
        options.help = True
        argumentString = options.buildArgumentString(validate=True)
        self.failUnlessEqual("--help ", argumentString)

    def testBuildArgumentString_020(self):
        """Test with version set, validate=True."""
        options = Options()
        options.version = True
        argumentString = options.buildArgumentString(validate=True)
        self.failUnlessEqual("--version ", argumentString)

    def testBuildArgumentString_021(self):
        """Test with verbose set,
validate=True.""" options = Options() options.verbose = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_022(self): """Test with quiet set, validate=True.""" options = Options() options.quiet = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_023(self): """Test with config set, validate=True.""" options = Options() options.config = "stuff" self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_024(self): """Test with full set, validate=True.""" options = Options() options.full = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_025(self): """Test with logfile set, validate=True.""" options = Options() options.logfile = "bogus" self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_026(self): """Test with owner set, validate=True.""" options = Options() options.owner = ("ken", "group") self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_027(self): """Test with mode set, validate=True.""" options = Options() options.mode = 0644 self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_028(self): """Test with output set, validate=True.""" options = Options() options.output = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_029(self): """Test with debug set, validate=True.""" options = Options() options.debug = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_030(self): """Test with stacktrace set, validate=True.""" options = Options() options.stacktrace = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def 
testBuildArgumentString_031(self): """Test with actions containing one item, validate=True.""" options = Options() options.actions = [ "collect", ] argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual('"collect" ', argumentString) def testBuildArgumentString_032(self): """Test with actions containing multiple items, validate=True.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual('"collect" "stage" "store" "purge" ', argumentString) def testBuildArgumentString_033(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" ', argumentString) def testBuildArgumentString_034(self): """Test with all values set (except managed ones), actions containing multiple items, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" 
--owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" "stage" ', argumentString)

    def testBuildArgumentString_035(self):
        """Test with managed set, validate=False."""
        options = Options()
        options.managed = True
        argumentString = options.buildArgumentString(validate=False)
        self.failUnlessEqual("--managed ", argumentString)

    def testBuildArgumentString_036(self):
        """Test with managed set, validate=True."""
        options = Options()
        options.managed = True
        self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_037(self):
        """Test with managedOnly set, validate=False."""
        options = Options()
        options.managedOnly = True
        argumentString = options.buildArgumentString(validate=False)
        self.failUnlessEqual("--managed-only ", argumentString)

    def testBuildArgumentString_038(self):
        """Test with managedOnly set, validate=True."""
        options = Options()
        options.managedOnly = True
        self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_039(self):
        """Test with all values set, actions containing one item, validate=True."""
        options = Options()
        options.help = True
        options.version = True
        options.verbose = True
        options.quiet = True
        options.config = "config"
        options.full = True
        options.managed = True
        options.managedOnly = True
        options.logfile = "logfile"
        options.owner = ("a", "b")
        options.mode = "631"
        options.output = True
        options.debug = True
        options.stacktrace = True
        options.diagnostics = True
        options.actions = ["collect", ]
        self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

    def testBuildArgumentString_040(self):
        """Test with all values set, actions containing multiple items, validate=True."""
        options = Options()
        options.help = True
        options.version = True
        options.verbose = True
        options.quiet = True
        options.config = "config"
        options.full = True
        options.managed = True
        options.managedOnly = True
        options.logfile = "logfile"
options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_041(self): """Test with diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--diagnostics ", argumentString) def testBuildArgumentString_042(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual("--diagnostics ", argumentString) ###################### # TestActionSet class ###################### class TestActionSet(unittest.TestCase): """Tests for the _ActionSet class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################################### # Test constructor, "index" order mode ####################################### def testActionSet_001(self): """ Test with actions=None, extensions=None. """ actions = None extensions = ExtensionsConfig(None, None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_002(self): """ Test with actions=[], extensions=None. """ actions = [] extensions = ExtensionsConfig(None, None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_003(self): """ Test with actions=[], extensions=[]. """ actions = [] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_004(self): """ Test with actions=[ collect ], extensions=[]. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_005(self): """ Test with actions=[ stage ], extensions=[]. """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testActionSet_006(self): """ Test with actions=[ store ], extensions=[]. """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testActionSet_007(self): """ Test with actions=[ purge ], extensions=[]. 
""" actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) def testActionSet_008(self): """ Test with actions=[ all ], extensions=[]. """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, 
actionSet.actionSet[3].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testActionSet_009(self): """ Test with actions=[ rebuild ], extensions=[]. """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("rebuild", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function) def testActionSet_010(self): """ Test with actions=[ validate ], extensions=[]. """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("validate", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function) def testActionSet_011(self): """ Test with actions=[ collect, collect ], extensions=[]. 
        """
        actions = [ "collect", "collect", ]
        extensions = ExtensionsConfig([], None)
        options = OptionsConfig()
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual(100, actionSet.actionSet[0].index)
        self.failUnlessEqual("collect", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
        self.failUnlessEqual(100, actionSet.actionSet[1].index)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testActionSet_012(self):
        """
        Test with actions=[ collect, stage ], extensions=[].
        """
        actions = [ "collect", "stage", ]
        extensions = ExtensionsConfig([], None)
        options = OptionsConfig()
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual(100, actionSet.actionSet[0].index)
        self.failUnlessEqual("collect", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
        self.failUnlessEqual(200, actionSet.actionSet[1].index)
        self.failUnlessEqual("stage", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

    def testActionSet_013(self):
        """
        Test with actions=[ collect, store ], extensions=[].
""" actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_014(self): """ Test with actions=[ collect, purge ], extensions=[]. """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_015(self): """ Test with actions=[ collect, all ], extensions=[]. 
""" actions = [ "collect", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_016(self): """ Test with actions=[ collect, rebuild ], extensions=[]. """ actions = [ "collect", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_017(self): """ Test with actions=[ collect, validate ], extensions=[]. """ actions = [ "collect", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_018(self): """ Test with actions=[ stage, collect ], extensions=[]. """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_019(self): """ Test with actions=[ stage, stage ], extensions=[]. 
""" actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_020(self): """ Test with actions=[ stage, store ], extensions=[]. """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_021(self): """ Test with actions=[ stage, purge ], extensions=[]. 
""" actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_022(self): """ Test with actions=[ stage, all ], extensions=[]. """ actions = [ "stage", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_023(self): """ Test with actions=[ stage, rebuild ], extensions=[]. """ actions = [ "stage", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_024(self): """ Test with actions=[ stage, validate ], extensions=[]. """ actions = [ "stage", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_025(self): """ Test with actions=[ store, collect ], extensions=[]. 
""" actions = [ "store", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_026(self): """ Test with actions=[ store, stage ], extensions=[]. """ actions = [ "store", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_027(self): """ Test with actions=[ store, store ], extensions=[]. 
""" actions = [ "store", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_028(self): """ Test with actions=[ store, purge ], extensions=[]. """ actions = [ "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_029(self): """ Test with actions=[ store, all ], extensions=[]. 
""" actions = [ "store", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_030(self): """ Test with actions=[ store, rebuild ], extensions=[]. """ actions = [ "store", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_031(self): """ Test with actions=[ store, validate ], extensions=[]. """ actions = [ "store", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_032(self): """ Test with actions=[ purge, collect ], extensions=[]. """ actions = [ "purge", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_033(self): """ Test with actions=[ purge, stage ], extensions=[]. 
""" actions = [ "purge", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_034(self): """ Test with actions=[ purge, store ], extensions=[]. """ actions = [ "purge", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_035(self): """ Test with actions=[ purge, purge ], extensions=[]. 
""" actions = [ "purge", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_036(self): """ Test with actions=[ purge, all ], extensions=[]. """ actions = [ "purge", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_037(self): """ Test with actions=[ purge, rebuild ], extensions=[]. """ actions = [ "purge", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_038(self): """ Test with actions=[ purge, validate ], extensions=[]. """ actions = [ "purge", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_039(self): """ Test with actions=[ all, collect ], extensions=[]. 
""" actions = [ "all", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_040(self): """ Test with actions=[ all, stage ], extensions=[]. """ actions = [ "all", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_041(self): """ Test with actions=[ all, store ], extensions=[]. """ actions = [ "all", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_042(self): """ Test with actions=[ all, purge ], extensions=[]. """ actions = [ "all", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_043(self): """ Test with actions=[ all, all ], extensions=[]. """ actions = [ "all", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_044(self): """ Test with actions=[ all, rebuild ], extensions=[]. """ actions = [ "all", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_045(self): """ Test with actions=[ all, validate ], extensions=[]. """ actions = [ "all", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_046(self): """ Test with actions=[ rebuild, collect ], extensions=[]. 
""" actions = [ "rebuild", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_047(self): """ Test with actions=[ rebuild, stage ], extensions=[]. """ actions = [ "rebuild", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_048(self): """ Test with actions=[ rebuild, store ], extensions=[]. """ actions = [ "rebuild", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_049(self): """ Test with actions=[ rebuild, purge ], extensions=[]. """ actions = [ "rebuild", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_050(self): """ Test with actions=[ rebuild, all ], extensions=[]. """ actions = [ "rebuild", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_051(self): """ Test with actions=[ rebuild, rebuild ], extensions=[]. """ actions = [ "rebuild", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_052(self): """ Test with actions=[ rebuild, validate ], extensions=[]. 
""" actions = [ "rebuild", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_053(self): """ Test with actions=[ validate, collect ], extensions=[]. """ actions = [ "validate", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_054(self): """ Test with actions=[ validate, stage ], extensions=[]. """ actions = [ "validate", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_055(self): """ Test with actions=[ validate, store ], extensions=[]. """ actions = [ "validate", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_056(self): """ Test with actions=[ validate, purge ], extensions=[]. """ actions = [ "validate", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_057(self): """ Test with actions=[ validate, all ], extensions=[]. """ actions = [ "validate", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_058(self): """ Test with actions=[ validate, rebuild ], extensions=[]. 
""" actions = [ "validate", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_059(self): """ Test with actions=[ validate, validate ], extensions=[]. """ actions = [ "validate", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_060(self): """ Test with actions=[ bogus ], extensions=[]. """ actions = [ "bogus", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_061(self): """ Test with actions=[ bogus, collect ], extensions=[]. """ actions = [ "bogus", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_062(self): """ Test with actions=[ bogus, stage ], extensions=[]. """ actions = [ "bogus", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_063(self): """ Test with actions=[ bogus, store ], extensions=[]. """ actions = [ "bogus", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_064(self): """ Test with actions=[ bogus, purge ], extensions=[]. """ actions = [ "bogus", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_065(self): """ Test with actions=[ bogus, all ], extensions=[]. 
""" actions = [ "bogus", "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_066(self): """ Test with actions=[ bogus, rebuild ], extensions=[]. """ actions = [ "bogus", "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_067(self): """ Test with actions=[ bogus, validate ], extensions=[]. """ actions = [ "bogus", "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_068(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ]. """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_069(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 50) ]. 
""" actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_070(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ]. """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_071(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 50) ]. 
""" actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_072(self): """ Test with actions=[ all, one ], extensions=[ (one, index 50) ]. """ actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_073(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 50) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_074(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 50) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_075(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ]. 
""" actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_076(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 150) ]. """ actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testActionSet_077(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ]. 
""" actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_078(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 150) ]. """ actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_079(self): """ Test with actions=[ all, one ], extensions=[ (one, index 150) ]. 
""" actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_080(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 150) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_081(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 150) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_082(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 250) ]. 
""" actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(250, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_083(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 250) ]. """ actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(250, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_084(self): """ Test with actions=[ store, one ], extensions=[ (one, index 250) ]. 
""" actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(250, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testActionSet_085(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 250) ]. """ actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(250, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_086(self): """ Test with actions=[ all, one ], extensions=[ (one, index 250) ]. 
""" actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_087(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 250) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_088(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 250) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_089(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 350) ]. 
""" actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(350, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_090(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 350) ]. """ actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(350, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_091(self): """ Test with actions=[ store, one ], extensions=[ (one, index 350) ]. 
""" actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(350, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_092(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 350) ]. """ actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(350, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testActionSet_093(self): """ Test with actions=[ all, one ], extensions=[ (one, index 350) ]. 
""" actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_094(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 350) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_095(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 350) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_096(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 450) ]. 
""" actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(450, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_097(self): """ Test with actions=[ stage, one ], extensions=[ (one, index 450) ]. """ actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(450, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_098(self): """ Test with actions=[ store, one ], extensions=[ (one, index 450) ]. 
""" actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(450, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_099(self): """ Test with actions=[ purge, one ], extensions=[ (one, index 450) ]. """ actions = [ "purge", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual(450, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_100(self): """ Test with actions=[ all, one ], extensions=[ (one, index 450) ]. 
""" actions = [ "all", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_101(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, index 450) ]. """ actions = [ "rebuild", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_102(self): """ Test with actions=[ validate, one ], extensions=[ (one, index 450) ]. """ actions = [ "validate", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_103(self): """ Test with actions=[ one, one ], extensions=[ (one, index 450) ]. """ actions = [ "one", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(450, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(450, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testActionSet_104(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[]. 
""" actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testActionSet_105(self): """ Test with actions=[ stage, purge, collect, store ], extensions=[]. 
""" actions = [ "stage", "purge", "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testActionSet_106(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)]. 
""" actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) self.failUnlessEqual(150, actionSet.actionSet[2].index) self.failUnlessEqual("two", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(isfile, actionSet.actionSet[2].function) self.failUnlessEqual(200, actionSet.actionSet[3].index) self.failUnlessEqual("stage", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[3].function) self.failUnlessEqual(250, actionSet.actionSet[4].index) self.failUnlessEqual("three", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(islink, 
actionSet.actionSet[4].function) self.failUnlessEqual(300, actionSet.actionSet[5].index) self.failUnlessEqual("store", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHook) self.failUnlessEqual(None, actionSet.actionSet[5].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[5].function) self.failUnlessEqual(350, actionSet.actionSet[6].index) self.failUnlessEqual("four", actionSet.actionSet[6].name) self.failUnlessEqual(None, actionSet.actionSet[6].preHook) self.failUnlessEqual(None, actionSet.actionSet[6].postHook) self.failUnlessEqual(isabs, actionSet.actionSet[6].function) self.failUnlessEqual(400, actionSet.actionSet[7].index) self.failUnlessEqual("purge", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHook) self.failUnlessEqual(None, actionSet.actionSet[7].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[7].function) self.failUnlessEqual(450, actionSet.actionSet[8].index) self.failUnlessEqual("five", actionSet.actionSet[8].name) self.failUnlessEqual(None, actionSet.actionSet[8].preHook) self.failUnlessEqual(None, actionSet.actionSet[8].postHook) self.failUnlessEqual(exists, actionSet.actionSet[8].function) def testActionSet_107(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], extensions=[ (index 50, 150, 250, 350, 450)]. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) self.failUnlessEqual(150, actionSet.actionSet[2].index) self.failUnlessEqual("two", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(isfile, actionSet.actionSet[2].function) self.failUnlessEqual(200, actionSet.actionSet[3].index) self.failUnlessEqual("stage", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[3].function) self.failUnlessEqual(250, actionSet.actionSet[4].index) self.failUnlessEqual("three", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(islink, 
actionSet.actionSet[4].function) self.failUnlessEqual(300, actionSet.actionSet[5].index) self.failUnlessEqual("store", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHook) self.failUnlessEqual(None, actionSet.actionSet[5].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[5].function) self.failUnlessEqual(350, actionSet.actionSet[6].index) self.failUnlessEqual("four", actionSet.actionSet[6].name) self.failUnlessEqual(None, actionSet.actionSet[6].preHook) self.failUnlessEqual(None, actionSet.actionSet[6].postHook) self.failUnlessEqual(isabs, actionSet.actionSet[6].function) self.failUnlessEqual(400, actionSet.actionSet[7].index) self.failUnlessEqual("purge", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHook) self.failUnlessEqual(None, actionSet.actionSet[7].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[7].function) self.failUnlessEqual(450, actionSet.actionSet[8].index) self.failUnlessEqual("five", actionSet.actionSet[8].name) self.failUnlessEqual(None, actionSet.actionSet[8].preHook) self.failUnlessEqual(None, actionSet.actionSet[8].postHook) self.failUnlessEqual(exists, actionSet.actionSet[8].function) def testActionSet_108(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ]. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_109(self): """ Test with actions=[ collect ], extensions=[], hooks=[] """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_110(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'stage' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_111(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'stage' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PostActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_112(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("collect", "something"), actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_113(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'collect' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("collect", "something"), actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_114(self): """ Test with actions=[ collect ], extensions=[], pre- and post-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something1"), PostActionHook("collect", "something2") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("collect", "something1"), actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("collect", "something2"), actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_115(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], hooks=[] """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_116(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre-hook on "store" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_117(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], post-hook on "store" action. """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_118(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre-hook on "one" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("one", "extension"), actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_119(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], post-hook on "one" action. """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("one", "extension"), actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_120(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre- and post-hook on "one" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension2"), PreActionHook("one", "extension1"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("one", "extension1"), actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("one", "extension2"), actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_121(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], hooks=[] """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_122(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "purge" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ 
PreActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_123(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "purge" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_124(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "collect" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ 
ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(PreActionHook("collect", "something"), actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_125(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "collect" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(PostActionHook("collect", "something"), actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def 
testActionSet_126(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "one" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("one", "extension"), actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_127(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "one" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("one", "extension"), actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, 
                           actionSet.actionSet[1].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

   def testActionSet_128(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ],
      set of various pre- and post hooks.
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.hooks = [ PostActionHook("one", "extension"),
                        PreActionHook("collect", "something"),
                        PostActionHook("stage", "whatever"), ]
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(PostActionHook("one", "extension"), actionSet.actionSet[0].postHook)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(PreActionHook("collect", "something"), actionSet.actionSet[1].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)


   ############################################
   # Test constructor, "dependency" order mode
   ############################################

   def testDependencyMode_001(self):
      """
      Test with actions=None, extensions=None.
      """
      actions = None
      extensions = ExtensionsConfig(None, "dependency")
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testDependencyMode_002(self):
      """
      Test with actions=[], extensions=None.
      """
      actions = []
      extensions = ExtensionsConfig(None, "dependency")
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testDependencyMode_003(self):
      """
      Test with actions=[], extensions=[].
      """
      actions = []
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testDependencyMode_004(self):
      """
      Test with actions=[ collect ], extensions=[].
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testDependencyMode_005(self):
      """
      Test with actions=[ stage ], extensions=[].
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)

   def testDependencyMode_006(self):
      """
      Test with actions=[ store ], extensions=[].
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)

   def testDependencyMode_007(self):
      """
      Test with actions=[ purge ], extensions=[].
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
      self.failUnlessEqual(executePurge, actionSet.actionSet[0].function)

   def testDependencyMode_008(self):
      """
      Test with actions=[ all ], extensions=[].
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHook)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHook)
      self.failUnlessEqual(executePurge, actionSet.actionSet[3].function)

   def testDependencyMode_009(self):
      """
      Test with actions=[ rebuild ], extensions=[].
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("rebuild", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
      self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function)

   def testDependencyMode_010(self):
      """
      Test with actions=[ validate ], extensions=[].
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("validate", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
      self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function)

   def testDependencyMode_011(self):
      """
      Test with actions=[ collect, collect ], extensions=[].
""" actions = [ "collect", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_012(self): """ Test with actions=[ collect, stage ], extensions=[]. """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_013(self): """ Test with actions=[ collect, store ], extensions=[]. 
""" actions = [ "collect", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_014(self): """ Test with actions=[ collect, purge ], extensions=[]. """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_015(self): """ Test with actions=[ collect, all ], extensions=[]. 
""" actions = [ "collect", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_016(self): """ Test with actions=[ collect, rebuild ], extensions=[]. """ actions = [ "collect", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_017(self): """ Test with actions=[ collect, validate ], extensions=[]. """ actions = [ "collect", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_018(self): """ Test with actions=[ stage, collect ], extensions=[]. """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_019(self): """ Test with actions=[ stage, stage ], extensions=[]. 
""" actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_020(self): """ Test with actions=[ stage, store ], extensions=[]. """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_021(self): """ Test with actions=[ stage, purge ], extensions=[]. 
""" actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_022(self): """ Test with actions=[ stage, all ], extensions=[]. """ actions = [ "stage", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_023(self): """ Test with actions=[ stage, rebuild ], extensions=[]. """ actions = [ "stage", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_024(self): """ Test with actions=[ stage, validate ], extensions=[]. """ actions = [ "stage", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_025(self): """ Test with actions=[ store, collect ], extensions=[]. 
""" actions = [ "store", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_026(self): """ Test with actions=[ store, stage ], extensions=[]. """ actions = [ "store", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_027(self): """ Test with actions=[ store, store ], extensions=[]. 
""" actions = [ "store", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_028(self): """ Test with actions=[ store, purge ], extensions=[]. """ actions = [ "store", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_029(self): """ Test with actions=[ store, all ], extensions=[]. 
""" actions = [ "store", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_030(self): """ Test with actions=[ store, rebuild ], extensions=[]. """ actions = [ "store", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_031(self): """ Test with actions=[ store, validate ], extensions=[]. """ actions = [ "store", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_032(self): """ Test with actions=[ purge, collect ], extensions=[]. """ actions = [ "purge", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_033(self): """ Test with actions=[ purge, stage ], extensions=[]. 
""" actions = [ "purge", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_034(self): """ Test with actions=[ purge, store ], extensions=[]. """ actions = [ "purge", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_035(self): """ Test with actions=[ purge, purge ], extensions=[]. 
""" actions = [ "purge", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_036(self): """ Test with actions=[ purge, all ], extensions=[]. """ actions = [ "purge", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_037(self): """ Test with actions=[ purge, rebuild ], extensions=[]. """ actions = [ "purge", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_038(self): """ Test with actions=[ purge, validate ], extensions=[]. """ actions = [ "purge", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_039(self): """ Test with actions=[ all, collect ], extensions=[]. 
""" actions = [ "all", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_040(self): """ Test with actions=[ all, stage ], extensions=[]. """ actions = [ "all", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_041(self): """ Test with actions=[ all, store ], extensions=[]. """ actions = [ "all", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_042(self): """ Test with actions=[ all, purge ], extensions=[]. """ actions = [ "all", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_043(self): """ Test with actions=[ all, all ], extensions=[]. """ actions = [ "all", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_044(self): """ Test with actions=[ all, rebuild ], extensions=[]. """ actions = [ "all", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_045(self): """ Test with actions=[ all, validate ], extensions=[]. 
""" actions = [ "all", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_046(self): """ Test with actions=[ rebuild, collect ], extensions=[]. """ actions = [ "rebuild", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_047(self): """ Test with actions=[ rebuild, stage ], extensions=[]. """ actions = [ "rebuild", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_048(self): """ Test with actions=[ rebuild, store ], extensions=[]. """ actions = [ "rebuild", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_049(self): """ Test with actions=[ rebuild, purge ], extensions=[]. """ actions = [ "rebuild", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_050(self): """ Test with actions=[ rebuild, all ], extensions=[]. """ actions = [ "rebuild", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_051(self): """ Test with actions=[ rebuild, rebuild ], extensions=[]. 
""" actions = [ "rebuild", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_052(self): """ Test with actions=[ rebuild, validate ], extensions=[]. """ actions = [ "rebuild", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_053(self): """ Test with actions=[ validate, collect ], extensions=[]. """ actions = [ "validate", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_054(self): """ Test with actions=[ validate, stage ], extensions=[]. """ actions = [ "validate", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_055(self): """ Test with actions=[ validate, store ], extensions=[]. """ actions = [ "validate", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_056(self): """ Test with actions=[ validate, purge ], extensions=[]. """ actions = [ "validate", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_057(self): """ Test with actions=[ validate, all ], extensions=[]. 
""" actions = [ "validate", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_058(self): """ Test with actions=[ validate, rebuild ], extensions=[]. """ actions = [ "validate", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_059(self): """ Test with actions=[ validate, validate ], extensions=[]. """ actions = [ "validate", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_060(self): """ Test with actions=[ bogus ], extensions=[]. """ actions = [ "bogus", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_061(self): """ Test with actions=[ bogus, collect ], extensions=[]. """ actions = [ "bogus", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_062(self): """ Test with actions=[ bogus, stage ], extensions=[]. """ actions = [ "bogus", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_063(self): """ Test with actions=[ bogus, store ], extensions=[]. 
""" actions = [ "bogus", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_064(self): """ Test with actions=[ bogus, purge ], extensions=[]. """ actions = [ "bogus", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_065(self): """ Test with actions=[ bogus, all ], extensions=[]. """ actions = [ "bogus", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_066(self): """ Test with actions=[ bogus, rebuild ], extensions=[]. """ actions = [ "bogus", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_067(self): """ Test with actions=[ bogus, validate ], extensions=[]. """ actions = [ "bogus", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_068(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ]. 
""" actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_069(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ]. """ actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_070(self): """ Test with actions=[ store, one ], extensions=[ (one, before store) ]. 
""" actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_071(self): """ Test with actions=[ purge, one ], extensions=[ (one, before purge) ]. """ actions = [ "purge", "one", ] dependencies = ActionDependencies(["purge", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_072(self): """ Test with actions=[ all, one ], extensions=[ (one, before collect) ]. 
""" actions = [ "all", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_073(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, before collect) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_074(self): """ Test with actions=[ validate, one ], extensions=[ (one, before collect) ]. """ actions = [ "validate", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_075(self): """ Test with actions=[ collect, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_076(self): """ Test with actions=[ stage, one ], extensions=[ (one, after collect) ]. """ actions = [ "stage", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_077(self): """ Test with actions=[ store, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "store", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_078(self): """ Test with actions=[ purge, one ], extensions=[ (one, after collect) ]. """ actions = [ "purge", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_079(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_080(self): """ Test with actions=[ store, one ], extensions=[ (one, before stage ) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_081(self): """ Test with actions=[ purge, one ], extensions=[ (one, before stage) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_082(self): """ Test with actions=[ all, one ], extensions=[ (one, after collect) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_083(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after collect) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_084(self): """ Test with actions=[ validate, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_085(self): """ Test with actions=[ collect, one ], extensions=[ (one, after stage) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_086(self): """ Test with actions=[ stage, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_087(self): """ Test with actions=[ store, one ], extensions=[ (one, after stage) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_088(self): """ Test with actions=[ purge, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_089(self): """ Test with actions=[ collect, one ], extensions=[ (one, before store) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_090(self): """ Test with actions=[ stage, one ], extensions=[ (one, before store) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_091(self): """ Test with actions=[ store, one ], extensions=[ (one, before store) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_092(self): """ Test with actions=[ purge, one ], extensions=[ (one, before store) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_093(self): """ Test with actions=[ all, one ], extensions=[ (one, after stage) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_094(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after stage) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_095(self): """ Test with actions=[ validate, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_096(self): """ Test with actions=[ collect, one ], extensions=[ (one, after store) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_097(self): """ Test with actions=[ stage, one ], extensions=[ (one, after store) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_098(self): """ Test with actions=[ store, one ], extensions=[ (one, after store) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_099(self): """ Test with actions=[ purge, one ], extensions=[ (one, after store) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_100(self): """ Test with actions=[ collect, one ], extensions=[ (one, before purge) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_101(self): """ Test with actions=[ stage, one ], extensions=[ (one, before purge) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_102(self): """ Test with actions=[ store, one ], extensions=[ (one, before purge) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_103(self): """ Test with actions=[ purge, one ], extensions=[ (one, before purge) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_104(self): """ Test with actions=[ all, one ], extensions=[ (one, after store) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_105(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after store) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_106(self): """ Test with actions=[ validate, one ], extensions=[ (one, after store) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_107(self): """ Test with actions=[ collect, one ], extensions=[ (one, after purge) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_108(self): """ Test with actions=[ stage, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_109(self): """ Test with actions=[ store, one ], extensions=[ (one, after purge) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_110(self): """ Test with actions=[ purge, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_111(self): """ Test with actions=[ all, one ], extensions=[ (one, after purge) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_112(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after purge) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_113(self): """ Test with actions=[ validate, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_114(self): """ Test with actions=[ one, one ], extensions=[ (one, after purge) ]. """ actions = [ "one", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_115(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[]. 
""" actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_116(self): """ Test with actions=[ stage, purge, collect, store ], extensions=[]. 
""" actions = [ "stage", "purge", "collect", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_117(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ one before collect, two before stage, etc. ]. 
""" actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], None) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies([], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) self.failUnlessEqual("two", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(isfile, actionSet.actionSet[2].function) self.failUnlessEqual("stage", actionSet.actionSet[3].name) self.failUnlessEqual(None, 
actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[3].function) self.failUnlessEqual("three", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(islink, actionSet.actionSet[4].function) self.failUnlessEqual("store", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHook) self.failUnlessEqual(None, actionSet.actionSet[5].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[5].function) self.failUnlessEqual("four", actionSet.actionSet[6].name) self.failUnlessEqual(None, actionSet.actionSet[6].preHook) self.failUnlessEqual(None, actionSet.actionSet[6].postHook) self.failUnlessEqual(isabs, actionSet.actionSet[6].function) self.failUnlessEqual("purge", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHook) self.failUnlessEqual(None, actionSet.actionSet[7].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[7].function) self.failUnlessEqual("five", actionSet.actionSet[8].name) self.failUnlessEqual(None, actionSet.actionSet[8].preHook) self.failUnlessEqual(None, actionSet.actionSet[8].postHook) self.failUnlessEqual(exists, actionSet.actionSet[8].function) def testDependencyMode_118(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], extensions=[ one before collect, two before stage, etc. ]. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies(None, ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) self.failUnlessEqual("two", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(isfile, actionSet.actionSet[2].function) self.failUnlessEqual("stage", actionSet.actionSet[3].name) self.failUnlessEqual(None, 
actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[3].function) self.failUnlessEqual("three", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(islink, actionSet.actionSet[4].function) self.failUnlessEqual("store", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHook) self.failUnlessEqual(None, actionSet.actionSet[5].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[5].function) self.failUnlessEqual("four", actionSet.actionSet[6].name) self.failUnlessEqual(None, actionSet.actionSet[6].preHook) self.failUnlessEqual(None, actionSet.actionSet[6].postHook) self.failUnlessEqual(isabs, actionSet.actionSet[6].function) self.failUnlessEqual("purge", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHook) self.failUnlessEqual(None, actionSet.actionSet[7].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[7].function) self.failUnlessEqual("five", actionSet.actionSet[8].name) self.failUnlessEqual(None, actionSet.actionSet[8].preHook) self.failUnlessEqual(None, actionSet.actionSet[8].postHook) self.failUnlessEqual(exists, actionSet.actionSet[8].function) def testDependencyMode_119(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ]. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_120(self): """ Test with actions=[ collect ], extensions=[], hooks=[] """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_121(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'stage' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PreActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_122(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'stage' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PostActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_123(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("collect", "something"), actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_124(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'collect' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("collect", "something"), actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_125(self): """ Test with actions=[ collect ], extensions=[], pre- and post-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something1"), PostActionHook("collect", "something2") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("collect", "something1"), actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("collect", "something2"), actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_126(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], hooks=[] """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_127(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre-hook on "store" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_128(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], post-hook on "store" action. """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_129(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre-hook on "one" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("one", "extension"), actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_130(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], post-hook on "one" action. """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("one", "extension"), actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_131(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre- and post-hook on "one" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension2"), PreActionHook("one", "extension1"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(PreActionHook("one", "extension1"), actionSet.actionSet[0].preHook) self.failUnlessEqual(PostActionHook("one", "extension2"), actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_132(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], hooks=[] """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_133(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], pre-hook on "purge" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], None) 
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
        options = OptionsConfig()
        options.hooks = [ PreActionHook("purge", "rm -f"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testDependencyMode_134(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, before collect) ],
        post-hook on "purge" action
        """
        actions = [ "collect", "one", ]
        dependencies = ActionDependencies(["collect", ], [])
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
        options = OptionsConfig()
        options.hooks = [ PostActionHook("purge", "rm -f"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testDependencyMode_135(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, before collect) ],
        pre-hook on "collect" action
        """
        actions = [ "collect", "one", ]
        dependencies = ActionDependencies(["collect", ], None)
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
        options = OptionsConfig()
        options.hooks = [ PreActionHook("collect", "something"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(PreActionHook("collect", "something"), actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testDependencyMode_136(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, before collect) ],
        post-hook on "collect" action
        """
        actions = [ "collect", "one", ]
        dependencies = ActionDependencies(["collect", ], [])
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
        options = OptionsConfig()
        options.hooks = [ PostActionHook("collect", "something"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
        self.failUnlessEqual(PostActionHook("collect", "something"), actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testDependencyMode_137(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, before collect) ],
        pre-hook on "one" action
        """
        actions = [ "collect", "one", ]
        dependencies = ActionDependencies(["collect", ], None)
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
        options = OptionsConfig()
        options.hooks = [ PreActionHook("one", "extension"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(PreActionHook("one", "extension"), actionSet.actionSet[0].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHook)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testDependencyMode_138(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, before collect) ],
        post-hook on "one" action
        """
        actions = [ "collect", "one", ]
        dependencies = ActionDependencies(["collect", ], [])
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
        options = OptionsConfig()
        options.hooks = [ PostActionHook("one", "extension"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(PostActionHook("one", "extension"), actionSet.actionSet[0].postHook)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testDependencyMode_139(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, before collect) ],
        set of various pre- and post hooks.
        """
        actions = [ "collect", "one", ]
        dependencies = ActionDependencies(["collect", ], None)
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
        options = OptionsConfig()
        options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something"), PostActionHook("stage", "whatever"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHook)
        self.failUnlessEqual(PostActionHook("one", "extension"), actionSet.actionSet[0].postHook)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(PreActionHook("collect", "something"), actionSet.actionSet[1].preHook)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHook)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testDependencyMode_140(self):
        """
        Test with actions=[ one, five, collect, store, three, stage, four, purge, two ],
        extensions= [recursive loop].
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies(["one", ], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_141(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], and one extension for which a dependency does not exist. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "bogus", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies([], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) ######################################### # Test constructor, with managed peers ######################################### def testManagedPeer_001(self): """ Test with actions=[ collect ], extensions=[], peers=None, managed=True, local=True """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testManagedPeer_002(self): """ Test with actions=[ stage ], extensions=[], peers=None, managed=True, local=True """ actions = 
[ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testManagedPeer_003(self): """ Test with actions=[ store ], extensions=[], peers=None, managed=True, local=True """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testManagedPeer_004(self): """ Test with actions=[ purge ], extensions=[], peers=None, managed=True, local=True """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) def testManagedPeer_005(self): """ Test with actions=[ all ], extensions=[], peers=None, managed=True, local=True """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, 
options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testManagedPeer_006(self): """ Test with actions=[ rebuild ], extensions=[], peers=None, managed=True, local=True """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("rebuild", actionSet.actionSet[0].name) self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function) def testManagedPeer_007(self): """ Test with actions=[ validate ], extensions=[], peers=None, managed=True, local=True """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("validate", actionSet.actionSet[0].name) self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function) 
def testManagedPeer_008(self): """ Test with actions=[ collect, stage ], extensions=[], peers=None, managed=True, local=True """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_009(self): """ Test with actions=[ collect, store ], extensions=[], peers=None, managed=True, local=True """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_010(self): """ Test with actions=[ collect, purge ], extensions=[], peers=None, managed=True, local=True """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, 
actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testManagedPeer_011(self): """ Test with actions=[ stage, collect ], extensions=[], peers=None, managed=True, local=True """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_012(self): """ Test with actions=[ stage, stage ], extensions=[], peers=None, managed=True, local=True """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_013(self): """ Test with actions=[ stage, store ], extensions=[], peers=None, managed=True, 
local=True """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_014(self): """ Test with actions=[ stage, purge ], extensions=[], peers=None, managed=True, local=True """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testManagedPeer_015(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], peers=None, managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) 
self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testManagedPeer_016(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], peers=None, managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_017(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], peers=None, managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testManagedPeer_018(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], 
peers=None, managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_019(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], peers=None, managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testManagedPeer_020(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, 
five ], extensions=[ (index 50, 150, 250, 350, 450)], peers=None, managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) self.failUnlessEqual(150, actionSet.actionSet[2].index) self.failUnlessEqual("two", actionSet.actionSet[2].name) self.failUnlessEqual(isfile, actionSet.actionSet[2].function) self.failUnlessEqual(200, actionSet.actionSet[3].index) self.failUnlessEqual("stage", actionSet.actionSet[3].name) self.failUnlessEqual(executeStage, actionSet.actionSet[3].function) self.failUnlessEqual(250, actionSet.actionSet[4].index) self.failUnlessEqual("three", actionSet.actionSet[4].name) self.failUnlessEqual(islink, actionSet.actionSet[4].function) self.failUnlessEqual(300, actionSet.actionSet[5].index) self.failUnlessEqual("store", actionSet.actionSet[5].name) self.failUnlessEqual(executeStore, actionSet.actionSet[5].function) self.failUnlessEqual(350, actionSet.actionSet[6].index) self.failUnlessEqual("four", actionSet.actionSet[6].name) self.failUnlessEqual(isabs, actionSet.actionSet[6].function) self.failUnlessEqual(400, actionSet.actionSet[7].index) 
self.failUnlessEqual("purge", actionSet.actionSet[7].name) self.failUnlessEqual(executePurge, actionSet.actionSet[7].function) self.failUnlessEqual(450, actionSet.actionSet[8].index) self.failUnlessEqual("five", actionSet.actionSet[8].name) self.failUnlessEqual(exists, actionSet.actionSet[8].function) def testManagedPeer_021(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], peers=None, managed=True, local=True """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testManagedPeer_022(self): """ Test with actions=[ collect ], extensions=[], no peers, managed=True, local=True """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testManagedPeer_023(self): """ Test with actions=[ stage ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 
1) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testManagedPeer_024(self): """ Test with actions=[ store ], extensions=[], no peers, managed=True, local=True """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testManagedPeer_025(self): """ Test with actions=[ purge ], extensions=[], no peers, managed=True, local=True """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) def testManagedPeer_026(self): """ Test with actions=[ all ], extensions=[], no peers, managed=True, local=True """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) 
self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testManagedPeer_027(self): """ Test with actions=[ rebuild ], extensions=[], no peers, managed=True, local=True """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("rebuild", actionSet.actionSet[0].name) self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function) def testManagedPeer_028(self): """ Test with actions=[ validate ], extensions=[], no peers, managed=True, local=True """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("validate", actionSet.actionSet[0].name) self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function) def testManagedPeer_029(self): """ Test with actions=[ collect, stage ], extensions=[], no peers, managed=True, local=True """ actions = [ "collect", "stage", ] extensions = 
ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_030(self): """ Test with actions=[ collect, store ], extensions=[], no peers, managed=True, local=True """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_031(self): """ Test with actions=[ collect, purge ], extensions=[], no peers, managed=True, local=True """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, 
actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testManagedPeer_032(self): """ Test with actions=[ stage, collect ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_033(self): """ Test with actions=[ stage, stage ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_034(self): """ Test with actions=[ stage, store ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) 
options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_035(self): """ Test with actions=[ stage, purge ], extensions=[], no peers, managed=True, local=True """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testManagedPeer_036(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, 
actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testManagedPeer_037(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_038(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testManagedPeer_039(self): """ Test with actions=[ store, one ], extensions=[ (one, index 
150) ], no peers, managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_040(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], no peers, managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testManagedPeer_041(self): """ Test with actions=[ 
collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)], no peers, managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) self.failUnlessEqual(150, actionSet.actionSet[2].index) self.failUnlessEqual("two", actionSet.actionSet[2].name) self.failUnlessEqual(isfile, actionSet.actionSet[2].function) self.failUnlessEqual(200, actionSet.actionSet[3].index) self.failUnlessEqual("stage", actionSet.actionSet[3].name) self.failUnlessEqual(executeStage, actionSet.actionSet[3].function) self.failUnlessEqual(250, actionSet.actionSet[4].index) self.failUnlessEqual("three", actionSet.actionSet[4].name) self.failUnlessEqual(islink, actionSet.actionSet[4].function) self.failUnlessEqual(300, actionSet.actionSet[5].index) self.failUnlessEqual("store", actionSet.actionSet[5].name) self.failUnlessEqual(executeStore, actionSet.actionSet[5].function) self.failUnlessEqual(350, actionSet.actionSet[6].index) self.failUnlessEqual("four", actionSet.actionSet[6].name) self.failUnlessEqual(isabs, actionSet.actionSet[6].function) 
      self.failUnlessEqual(400, actionSet.actionSet[7].index)
      self.failUnlessEqual("purge", actionSet.actionSet[7].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[7].function)
      self.failUnlessEqual(450, actionSet.actionSet[8].index)
      self.failUnlessEqual("five", actionSet.actionSet[8].name)
      self.failUnlessEqual(exists, actionSet.actionSet[8].function)

   def testManagedPeer_042(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ],
      no peers, managed=True, local=True
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)

   def testManagedPeer_043(self):
      """
      Test with actions=[ collect ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_044(self):
      """
      Test with actions=[ stage ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_045(self):
      """
      Test with actions=[ store ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_046(self):
      """
      Test with actions=[ purge ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_047(self):
      """
      Test with actions=[ all ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_048(self):
      """
      Test with actions=[ rebuild ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_049(self):
      """
      Test with actions=[ validate ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_050(self):
      """
      Test with actions=[ collect, stage ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_051(self):
      """
      Test with actions=[ collect, store ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_052(self):
      """
      Test with actions=[ collect, purge ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_053(self):
      """
      Test with actions=[ stage, collect ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_054(self):
      """
      Test with actions=[ stage, stage ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_055(self):
      """
      Test with actions=[ stage, store ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_056(self):
      """
      Test with actions=[ stage, purge ], extensions=[], no peers,
      managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_057(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ],
      no peers, managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_058(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ],
      no peers, managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_059(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ],
      no peers, managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_060(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ],
      no peers, managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_061(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[],
      no peers, managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_062(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)], no peers,
      managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_063(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ],
      no peers, managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_064(self):
      """
      Test with actions=[ collect ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_065(self):
      """
      Test with actions=[ stage ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_066(self):
      """
      Test with actions=[ store ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_067(self):
      """
      Test with actions=[ purge ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_068(self):
      """
      Test with actions=[ all ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_069(self):
      """
      Test with actions=[ rebuild ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_070(self):
      """
      Test with actions=[ validate ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_071(self):
      """
      Test with actions=[ collect, stage ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_072(self):
      """
      Test with actions=[ collect, store ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_073(self):
      """
      Test with actions=[ collect, purge ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_074(self):
      """
      Test with actions=[ stage, collect ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_075(self):
      """
      Test with actions=[ stage, stage ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_076(self):
      """
      Test with actions=[ stage, store ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_077(self):
      """
      Test with actions=[ stage, purge ], extensions=[], one peer (not managed),
      managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_078(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ],
      one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_079(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ],
      one peer (not managed), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_080(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ],
      one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_081(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ],
      one peer (not managed), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_082(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[],
      one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_083(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)],
      one peer (not managed), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_084(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ],
      one peer (not managed), managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_085(self):
      """
      Test with actions=[ collect ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_086(self):
      """
      Test with actions=[ stage ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_087(self):
      """
      Test with actions=[ store ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_088(self):
      """
      Test with actions=[ purge ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_089(self):
      """
      Test with actions=[ all ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_090(self):
      """
      Test with actions=[ rebuild ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_091(self):
      """
      Test with actions=[ validate ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_092(self):
      """
      Test with actions=[ collect, stage ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_093(self):
      """
      Test with actions=[ collect, store ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_094(self):
      """
      Test with actions=[ collect, purge ], extensions=[], one peer (managed),
      managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_095(self): """ Test with actions=[ stage, collect ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) 
self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_096(self): """ Test with actions=[ stage, stage ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_097(self): """ Test with actions=[ stage, store ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_098(self): """ Test with actions=[ stage, purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, 
options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_099(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], one peer (managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(100, 
actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_100(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], one peer (managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_101(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], one peer (managed), 
managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_102(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], one peer (managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ 
ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_103(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) 
self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_104(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)], one peer (managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", 
actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[2].index) self.failUnlessEqual("purge", actionSet.actionSet[2].name) self.failIf(actionSet.actionSet[2].remotePeers is None) self.failUnless(len(actionSet.actionSet[2].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand) def testManagedPeer_105(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], two peers (one managed, one not), managed=True, local=False """ actions = [ "one", ] extensions = ExtensionsConfig([ 
ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_106(self): """ Test with actions=[ collect ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) 
self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_107(self): """ Test with actions=[ stage ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_108(self): """ Test with actions=[ store ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_109(self): """ Test with actions=[ purge ], extensions=[], two 
peers (one managed, one not), managed=True, local=False """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_110(self): """ Test with actions=[ all ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, 
actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_111(self): """ Test with actions=[ rebuild ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_112(self): """ Test with actions=[ validate ], extensions=[], two peers (one managed, 
one not), managed=True, local=False """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_113(self): """ Test with actions=[ collect, stage ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_114(self): """ Test with actions=[ collect, store ], extensions=[], two peers (one managed, one not), 
managed=True, local=False """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_115(self): """ Test with actions=[ collect, purge ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) 
self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_116(self): """ Test with actions=[ stage, collect ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) 
self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_117(self): """ Test with actions=[ stage, stage ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_118(self): """ Test with actions=[ stage, store ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_119(self): """ Test with actions=[ stage, purge ], extensions=[], two peers (one managed, one not), managed=True, local=False """ actions = [ "stage", "purge", ] 
extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_120(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], two peers (one managed, one not), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) 
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_121(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers
      (one managed, one not), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_122(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers
      (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(150, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_123(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers
      (one managed, one not), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_124(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], two peers (one managed,
      one not), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_125(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350,
      450)], two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[2].index)
      self.failUnlessEqual("purge", actionSet.actionSet[2].name)
      self.failIf(actionSet.actionSet[2].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[2].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand)

   def testManagedPeer_126(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], two peers
      (one managed, one not), managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh",
                           actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_127(self):
      """
      Test with actions=[ collect ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_128(self):
      """
      Test with actions=[ stage ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [
         "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_129(self):
      """
      Test with actions=[ store ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_130(self):
      """
      Test with actions=[ purge ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_131(self):
      """
      Test with actions=[ all ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser",
                           actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_132(self):
      """
      Test with actions=[ rebuild ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_133(self):
      """
      Test with actions=[ validate ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_134(self):
      """
      Test with actions=[ collect, stage ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote",
                           actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_135(self):
      """
      Test with actions=[ collect, store ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_136(self):
      """
      Test with actions=[ collect, purge ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None,
                           actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_137(self):
      """
      Test with actions=[ stage, collect ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect",
                           actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_138(self):
      """
      Test with actions=[ stage, stage ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_139(self):
      """
      Test with actions=[ stage, store ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_140(self):
      """
      Test with actions=[ stage, purge ], extensions=[], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_141(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], two peers
      (both managed), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [
         RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
         RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True),
      ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote",
actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_142(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", 
actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_143(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", 
actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_144(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) 
self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_145(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) 
== 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_146(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)], two peers (both managed), 
managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) 
self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[2].index) self.failUnlessEqual("purge", actionSet.actionSet[2].name) self.failIf(actionSet.actionSet[2].remotePeers is None) self.failUnless(len(actionSet.actionSet[2].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[2].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[2].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[2].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[2].remotePeers[1].cbackCommand) def testManagedPeer_147(self): """ Test with actions=[ one ], extensions=[ 
(one, index 50) ], two peers (both managed), managed=True, local=False """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_148(self): """ Test with actions=[ collect ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ 
LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_149(self): """ Test with actions=[ stage ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() 
peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testManagedPeer_150(self): """ Test with actions=[ store ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testManagedPeer_151(self): """ Test with actions=[ purge ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() 
peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_152(self): """ Test with actions=[ all ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = 
PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 6) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(200, actionSet.actionSet[2].index) self.failUnlessEqual("stage", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) 
self.failUnlessEqual(executeStage, actionSet.actionSet[2].function) self.failUnlessEqual(300, actionSet.actionSet[3].index) self.failUnlessEqual("store", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[3].function) self.failUnlessEqual(400, actionSet.actionSet[4].index) self.failUnlessEqual("purge", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[4].function) self.failUnlessEqual(400, actionSet.actionSet[5].index) self.failUnlessEqual("purge", actionSet.actionSet[5].name) self.failIf(actionSet.actionSet[5].remotePeers is None) self.failUnless(len(actionSet.actionSet[5].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[5].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[5].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[5].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[5].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[5].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[5].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[5].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[5].remotePeers[1].cbackCommand) def testManagedPeer_153(self): """ Test with actions=[ rebuild ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() 
peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("rebuild", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function) def testManagedPeer_154(self): """ Test with actions=[ validate ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("validate", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function) def testManagedPeer_155(self): """ Test with actions=[ collect, stage ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] 
peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(200, actionSet.actionSet[2].index) self.failUnlessEqual("stage", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[2].function) def testManagedPeer_156(self): """ Test with actions=[ collect, store ], 
extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(300, actionSet.actionSet[2].index) 
self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) def testManagedPeer_157(self): """ Test with actions=[ collect, purge ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", 
actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[2].index) self.failUnlessEqual("purge", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand) def testManagedPeer_158(self): """ Test with actions=[ stage, collect ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] 
peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(200, actionSet.actionSet[2].index) self.failUnlessEqual("stage", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[2].function) def testManagedPeer_159(self): """ Test with actions=[ stage, stage ], 
extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testManagedPeer_160(self): """ Test with actions=[ stage, store ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) 
self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testManagedPeer_161(self): """ Test with actions=[ stage, purge ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHook) self.failUnlessEqual(None, actionSet.actionSet[1].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) self.failUnlessEqual(400, actionSet.actionSet[2].index) self.failUnlessEqual("purge", actionSet.actionSet[2].name) self.failIf(actionSet.actionSet[2].remotePeers is None) self.failUnless(len(actionSet.actionSet[2].remotePeers) == 2) 
self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[2].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[2].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[2].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[2].remotePeers[1].cbackCommand) def testManagedPeer_162(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) 
self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[2].index) self.failUnlessEqual("collect", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[2].function) self.failUnlessEqual(100, actionSet.actionSet[3].index) self.failUnlessEqual("collect", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser) self.failUnlessEqual(None, 
actionSet.actionSet[3].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand) def testManagedPeer_163(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", 
actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) def testManagedPeer_164(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", 
actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[2].index) self.failUnlessEqual("one", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[2].function) self.failUnlessEqual(150, actionSet.actionSet[3].index) self.failUnlessEqual("one", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand) def 
testManagedPeer_165(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", 
actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) def testManagedPeer_166(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 6) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) 
self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(200, actionSet.actionSet[2].index) self.failUnlessEqual("stage", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[2].function) self.failUnlessEqual(300, actionSet.actionSet[3].index) self.failUnlessEqual("store", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHook) self.failUnlessEqual(None, actionSet.actionSet[3].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[3].function) self.failUnlessEqual(400, actionSet.actionSet[4].index) self.failUnlessEqual("purge", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[4].function) self.failUnlessEqual(400, actionSet.actionSet[5].index) self.failUnlessEqual("purge", actionSet.actionSet[5].name) self.failIf(actionSet.actionSet[5].remotePeers is None) self.failUnless(len(actionSet.actionSet[5].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[5].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[5].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[5].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[5].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", 
actionSet.actionSet[5].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[5].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[5].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[5].remotePeers[1].cbackCommand) def testManagedPeer_167(self): """ Test with actions=[ collect, stage, store, purge, one, two ], extensions=[ (one..five at index 50, 150, 250, 350, 450) ], two peers (both managed), managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", "one", "two", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) 
self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[2].index) self.failUnlessEqual("collect", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[2].function) self.failUnlessEqual(100, actionSet.actionSet[3].index) self.failUnlessEqual("collect", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", 
actionSet.actionSet[3].remotePeers[1].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[4].index) self.failUnlessEqual("two", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(isfile, actionSet.actionSet[4].function) self.failUnlessEqual(200, actionSet.actionSet[5].index) self.failUnlessEqual("stage", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHook) self.failUnlessEqual(None, actionSet.actionSet[5].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[5].function) self.failUnlessEqual(300, actionSet.actionSet[6].index) self.failUnlessEqual("store", actionSet.actionSet[6].name) self.failUnlessEqual(None, actionSet.actionSet[6].preHook) self.failUnlessEqual(None, actionSet.actionSet[6].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[6].function) self.failUnlessEqual(400, actionSet.actionSet[7].index) self.failUnlessEqual("purge", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHook) self.failUnlessEqual(None, actionSet.actionSet[7].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[7].function) self.failUnlessEqual(400, actionSet.actionSet[8].index) self.failUnlessEqual("purge", actionSet.actionSet[8].name) self.failIf(actionSet.actionSet[8].remotePeers is None) self.failUnless(len(actionSet.actionSet[8].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[8].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[8].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[8].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[8].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[8].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[8].remotePeers[1].name) self.failUnlessEqual("ruser2", 
actionSet.actionSet[8].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[8].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[8].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[8].remotePeers[1].cbackCommand) def testManagedPeer_168(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) 
self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_169(self): """ Test to make sure that various options all seem to be pulled from the right places with mixed data. """ actions = [ "collect", "stage", "store", "purge", "one", "two", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] options.backupUser = "userZ" options.rshCommand = "rshZ" options.cbackCommand = "cbackZ" peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, None, None, None, "cback", managed=True), RemotePeer("remote2", None, "ruser2", None, "rsh2", None, managed=True, managedActions=[ "stage", ]), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 10) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHook) self.failUnlessEqual(None, actionSet.actionSet[0].postHook) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) 
self.failUnlessEqual("userZ", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rshZ", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[2].index) self.failUnlessEqual("collect", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHook) self.failUnlessEqual(None, actionSet.actionSet[2].postHook) self.failUnlessEqual(executeCollect, actionSet.actionSet[2].function) self.failUnlessEqual(100, actionSet.actionSet[3].index) self.failUnlessEqual("collect", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("userZ", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rshZ", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[4].index) self.failUnlessEqual("two", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHook) self.failUnlessEqual(None, actionSet.actionSet[4].postHook) self.failUnlessEqual(isfile, actionSet.actionSet[4].function) self.failUnlessEqual(200, actionSet.actionSet[5].index) self.failUnlessEqual("stage", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHook) self.failUnlessEqual(None, actionSet.actionSet[5].postHook) self.failUnlessEqual(executeStage, actionSet.actionSet[5].function) self.failUnlessEqual(200, actionSet.actionSet[6].index) self.failUnlessEqual("stage", actionSet.actionSet[6].name) 
self.failIf(actionSet.actionSet[6].remotePeers is None) self.failUnless(len(actionSet.actionSet[6].remotePeers) == 1) self.failUnlessEqual("remote2", actionSet.actionSet[6].remotePeers[0].name) self.failUnlessEqual("ruser2", actionSet.actionSet[6].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[6].remotePeers[0].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[6].remotePeers[0].rshCommand) self.failUnlessEqual("cbackZ", actionSet.actionSet[6].remotePeers[0].cbackCommand) self.failUnlessEqual(300, actionSet.actionSet[7].index) self.failUnlessEqual("store", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHook) self.failUnlessEqual(None, actionSet.actionSet[7].postHook) self.failUnlessEqual(executeStore, actionSet.actionSet[7].function) self.failUnlessEqual(400, actionSet.actionSet[8].index) self.failUnlessEqual("purge", actionSet.actionSet[8].name) self.failUnlessEqual(None, actionSet.actionSet[8].preHook) self.failUnlessEqual(None, actionSet.actionSet[8].postHook) self.failUnlessEqual(executePurge, actionSet.actionSet[8].function) self.failUnlessEqual(400, actionSet.actionSet[9].index) self.failUnlessEqual("purge", actionSet.actionSet[9].name) self.failIf(actionSet.actionSet[9].remotePeers is None) self.failUnless(len(actionSet.actionSet[9].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[9].remotePeers[0].name) self.failUnlessEqual("userZ", actionSet.actionSet[9].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[9].remotePeers[0].localUser) self.failUnlessEqual("rshZ", actionSet.actionSet[9].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[9].remotePeers[0].cbackCommand) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return 
unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), unittest.makeSuite(TestOptions, 'test'), unittest.makeSuite(TestActionSet, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() 
CedarBackup2-2.22.0/testcase/spantests.py
#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: spantests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests span tool functionality. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/tools/span.py. Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in tools/span.py. Where possible, we test functions that print output by passing a custom file descriptor. Sometimes, we only ensure that a function or method runs without failure, and we don't validate what its result is or what it prints out. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a SPANTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import unittest from CedarBackup2.testutil import captureOutput from CedarBackup2.tools.span import _usage, _version from CedarBackup2.tools.span import Options ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the public functions.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ######################## # Test simple functions ######################## def testSimpleFuncs_001(self): """ Test that the _usage() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_usage) def testSimpleFuncs_002(self): """ Test that the _version() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_version) ######################## # TestSpanOptions class ######################## class TestSpanOptions(unittest.TestCase): """Tests for the span tool's Options class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = Options() obj.__repr__() obj.__str__() ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), unittest.makeSuite(TestSpanOptions, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() 
CedarBackup2-2.22.0/testcase/utiltests.py
#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: utiltests.py 1023 2011-10-11 23:44:50Z pronovic $ # Purpose : Tests utility functionality. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # pylint: disable=C0322,C0324 ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/util.py. Code Coverage ============= This module contains individual tests for the public functions and classes implemented in util.py. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a UTILTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import sys import os import unittest import tempfile import time import logging from os.path import isdir from CedarBackup2.testutil import findResources, removedir, extractTar, buildPath, captureOutput from CedarBackup2.testutil import platformHasEcho, platformWindows, platformCygwin, platformSupportsLinks from CedarBackup2.util import UnorderedList, AbsolutePathList, ObjectTypeList from CedarBackup2.util import RestrictedContentList, RegexMatchList, RegexList from CedarBackup2.util import DirectedGraph, PathResolverSingleton, Diagnostics, parseCommaSeparatedString from CedarBackup2.util import sortDict, resolveCommand, executeCommand, getFunctionReference, encodePath from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_SECTORS, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup2.util import displayBytes, deriveDayOfWeek, isStartOfWeek, dereferenceLink from CedarBackup2.util import buildNormalizedPath, splitCommandLine, nullDevice ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data" ] RESOURCES = [ "lotsoflines.py", "tree10.tar.gz", ] ####################################################################### # Test Case Classes ####################################################################### ########################## # TestUnorderedList class ########################## class TestUnorderedList(unittest.TestCase): """Tests for the UnorderedList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################################## # Test unordered list comparisons ################################## def 
testComparison_001(self): """ Test two empty lists. """ list1 = UnorderedList() list2 = UnorderedList() self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_002(self): """ Test empty vs. non-empty list. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failIfEqual(list1, list2) self.failIfEqual(list2, list1) def testComparison_003(self): """ Test two non-empty lists, completely different contents. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append('a') list2.append('b') list2.append('c') list2.append('d') self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual(['a','b','c','d', ], list2) self.failUnlessEqual(['b','c','d','a', ], list2) self.failUnlessEqual(['c','d','a','b', ], list2) self.failUnlessEqual(['d','a','b','c', ], list2) self.failUnlessEqual(list2, ['d','c','b','a', ]) self.failUnlessEqual(list2, ['c','b','a','d', ]) self.failUnlessEqual(list2, ['b','a','d','c', ]) self.failUnlessEqual(list2, ['a','d','c','b', ]) self.failIfEqual(list1, list2) self.failIfEqual(list2, list1) def testComparison_004(self): """ Test two non-empty lists, different but overlapping contents. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(3) list2.append(4) list2.append('a') list2.append('b') self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual([3,4,'a','b', ], list2) self.failUnlessEqual([4,'a','b',3, ], list2) self.failUnlessEqual(['a','b',3,4, ], list2) self.failUnlessEqual(['b',3,4,'a', ], list2) self.failUnlessEqual(list2, ['b','a',4,3, ]) self.failUnlessEqual(list2, ['a',4,3,'b', ]) self.failUnlessEqual(list2, [4,3,'b','a', ]) self.failUnlessEqual(list2, [3,'b','a',4, ]) self.failIfEqual(list1, list2) self.failIfEqual(list2, list1) def testComparison_005(self): """ Test two non-empty lists, exactly the same contents, same order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(1) list2.append(2) list2.append(3) list2.append(4) self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual([1,2,3,4, ], list2) self.failUnlessEqual([2,3,4,1, ], list2) self.failUnlessEqual([3,4,1,2, ], list2) self.failUnlessEqual([4,1,2,3, ], list2) self.failUnlessEqual(list2, [4,3,2,1, ]) self.failUnlessEqual(list2, [3,2,1,4, ]) self.failUnlessEqual(list2, [2,1,4,3, ]) self.failUnlessEqual(list2, [1,4,3,2, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_006(self): """ Test two non-empty lists, exactly the same contents, different order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(3) list2.append(1) list2.append(2) list2.append(4) self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual([1,2,3,4, ], list2) self.failUnlessEqual([2,3,4,1, ], list2) self.failUnlessEqual([3,4,1,2, ], list2) self.failUnlessEqual([4,1,2,3, ], list2) self.failUnlessEqual(list2, [4,3,2,1, ]) self.failUnlessEqual(list2, [3,2,1,4, ]) self.failUnlessEqual(list2, [2,1,4,3, ]) self.failUnlessEqual(list2, [1,4,3,2, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_007(self): """ Test two non-empty lists, exactly the same contents, some duplicates, same order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(2) list1.append(3) list1.append(4) list1.append(4) list2.append(1) list2.append(2) list2.append(2) list2.append(3) list2.append(4) list2.append(4) self.failUnlessEqual([1,2,2,3,4,4, ], list1) self.failUnlessEqual([2,2,3,4,1,4, ], list1) self.failUnlessEqual([2,3,4,1,4,2, ], list1) self.failUnlessEqual([2,4,1,4,2,3, ], list1) self.failUnlessEqual(list1, [1,2,2,3,4,4, ]) self.failUnlessEqual(list1, [2,2,3,4,1,4, ]) self.failUnlessEqual(list1, [2,3,4,1,4,2, ]) self.failUnlessEqual(list1, [2,4,1,4,2,3, ]) self.failUnlessEqual([1,2,2,3,4,4, ], list2) self.failUnlessEqual([2,2,3,4,1,4, ], list2) self.failUnlessEqual([2,3,4,1,4,2, ], list2) self.failUnlessEqual([2,4,1,4,2,3, ], list2) self.failUnlessEqual(list2, [1,2,2,3,4,4, ]) self.failUnlessEqual(list2, [2,2,3,4,1,4, ]) self.failUnlessEqual(list2, [2,3,4,1,4,2, ]) self.failUnlessEqual(list2, [2,4,1,4,2,3, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_008(self): """ Test two non-empty lists, exactly the same contents, some duplicates, different order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(2) list1.append(3) list1.append(4) list1.append(4) list2.append(3) list2.append(1) list2.append(2) list2.append(2) list2.append(4) list2.append(4) self.failUnlessEqual([1,2,2,3,4,4, ], list1) self.failUnlessEqual([2,2,3,4,1,4, ], list1) self.failUnlessEqual([2,3,4,1,4,2, ], list1) self.failUnlessEqual([2,4,1,4,2,3, ], list1) self.failUnlessEqual(list1, [1,2,2,3,4,4, ]) self.failUnlessEqual(list1, [2,2,3,4,1,4, ]) self.failUnlessEqual(list1, [2,3,4,1,4,2, ]) self.failUnlessEqual(list1, [2,4,1,4,2,3, ]) self.failUnlessEqual([1,2,2,3,4,4, ], list2) self.failUnlessEqual([2,2,3,4,1,4, ], list2) self.failUnlessEqual([2,3,4,1,4,2, ], list2) self.failUnlessEqual([2,4,1,4,2,3, ], list2) self.failUnlessEqual(list2, [1,2,2,3,4,4, ]) self.failUnlessEqual(list2, [2,2,3,4,1,4, ]) self.failUnlessEqual(list2, [2,3,4,1,4,2, ]) self.failUnlessEqual(list2, [2,4,1,4,2,3, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) ############################# # TestAbsolutePathList class ############################# class TestAbsolutePathList(unittest.TestCase): """Tests for the AbsolutePathList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid absolute path. """ list1 = AbsolutePathList() list1.append("/path/to/something/absolute") self.failUnlessEqual(list1, [ "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") list1.append("/path/to/something/else") self.failUnlessEqual(list1, [ "/path/to/something/absolute", "/path/to/something/else", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") self.failUnlessEqual(list1[1], "/path/to/something/else") def testListOperations_002(self): """ Test append() for an invalid, non-absolute path. 
""" list1 = AbsolutePathList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "path/to/something/relative") self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid absolute path. """ list1 = AbsolutePathList() list1.insert(0, "/path/to/something/absolute") self.failUnlessEqual(list1, [ "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") list1.insert(0, "/path/to/something/else") self.failUnlessEqual(list1, [ "/path/to/something/else", "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/else") self.failUnlessEqual(list1[1], "/path/to/something/absolute") def testListOperations_004(self): """ Test insert() for an invalid, non-absolute path. """ list1 = AbsolutePathList() self.failUnlessRaises(ValueError, list1.insert, 0, "path/to/something/relative") def testListOperations_005(self): """ Test extend() for a valid absolute path. """ list1 = AbsolutePathList() list1.extend(["/path/to/something/absolute", ]) self.failUnlessEqual(list1, [ "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") list1.extend(["/path/to/something/else", ]) self.failUnlessEqual(list1, [ "/path/to/something/absolute", "/path/to/something/else", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") self.failUnlessEqual(list1[1], "/path/to/something/else") def testListOperations_006(self): """ Test extend() for an invalid, non-absolute path. 
""" list1 = AbsolutePathList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "path/to/something/relative", ]) self.failUnlessEqual(list1, []) ########################### # TestObjectTypeList class ########################### class TestObjectTypeList(unittest.TestCase): """Tests for the ObjectTypeList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid object type. """ list1 = ObjectTypeList(str, "str") list1.append("string") self.failUnlessEqual(list1, [ "string", ]) self.failUnlessEqual(list1[0], "string") list1.append("string2") self.failUnlessEqual(list1, [ "string", "string2", ]) self.failUnlessEqual(list1[0], "string") self.failUnlessEqual(list1[1], "string2") def testListOperations_002(self): """ Test append() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, 1) self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid object type. """ list1 = ObjectTypeList(str, "str") list1.insert(0, "string") self.failUnlessEqual(list1, [ "string", ]) self.failUnlessEqual(list1[0], "string") list1.insert(0, "string2") self.failUnlessEqual(list1, [ "string2", "string", ]) self.failUnlessEqual(list1[0], "string2") self.failUnlessEqual(list1[1], "string") def testListOperations_004(self): """ Test insert() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, AbsolutePathList()) self.failUnlessEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid object type. 
""" list1 = ObjectTypeList(str, "str") list1.extend(["string", ]) self.failUnlessEqual(list1, [ "string", ]) self.failUnlessEqual(list1[0], "string") list1.extend(["string2", ]) self.failUnlessEqual(list1, [ "string", "string2", ]) self.failUnlessEqual(list1[0], "string") self.failUnlessEqual(list1[1], "string2") def testListOperations_006(self): """ Test extend() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ 12.0, ]) self.failUnlessEqual(list1, []) ################################## # TestRestrictedContentList class ################################## class TestRestrictedContentList(unittest.TestCase): """Tests for the RestrictedContentList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.append("a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.append("b") self.failUnlessEqual(list1, [ "a", "b", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") list1.append("c") self.failUnlessEqual(list1, [ "a", "b", "c", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") self.failUnlessEqual(list1[2], "c") def testListOperations_002(self): """ Test append() for an invalid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "d") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, 1) self.failUnlessEqual(list1, []) self.failUnlessRaises(AttributeError, list1.append, UnorderedList()) self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid value. 
""" list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.insert(0, "a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.insert(0, "b") self.failUnlessEqual(list1, [ "b", "a", ]) self.failUnlessEqual(list1[0], "b") self.failUnlessEqual(list1[1], "a") list1.insert(0, "c") self.failUnlessEqual(list1, [ "c", "b", "a", ]) self.failUnlessEqual(list1[0], "c") self.failUnlessEqual(list1[1], "b") self.failUnlessEqual(list1[2], "a") def testListOperations_004(self): """ Test insert() for an invalid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "d") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, 1) self.failUnlessEqual(list1, []) self.failUnlessRaises(AttributeError, list1.insert, 0, UnorderedList()) self.failUnlessEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.extend(["a", ]) self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.extend(["b", ]) self.failUnlessEqual(list1, [ "a", "b", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") list1.extend(["c", ]) self.failUnlessEqual(list1, [ "a", "b", "c", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") self.failUnlessEqual(list1[2], "c") def testListOperations_006(self): """ Test extend() for an invalid value. 
""" list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, ["d", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [1, ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(AttributeError, list1.extend, [ UnorderedList(), ]) self.failUnlessEqual(list1, []) ########################### # TestRegexMatchList class ########################### class TestRegexMatchList(unittest.TestCase): """Tests for the RegexMatchList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.append("a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.append("1") self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.append("abcd12345") self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") list1.append("") self.failUnlessEqual(list1, [ "a", "1", "abcd12345", "", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") self.failUnlessEqual(list1[3], "") def testListOperations_002(self): """ Test append() for an invalid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.append, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, None) self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.insert(0, "a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.insert(0, "1") self.failUnlessEqual(list1, [ "1", "a", ]) self.failUnlessEqual(list1[0], "1") self.failUnlessEqual(list1[1], "a") list1.insert(0, "abcd12345") self.failUnlessEqual(list1, [ "abcd12345", "1", "a", ]) self.failUnlessEqual(list1[0], "abcd12345") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "a") list1.insert(0, "") self.failUnlessEqual(list1, [ "abcd12345", "1", "a", "", ]) self.failUnlessEqual(list1[0], "") self.failUnlessEqual(list1[1], "abcd12345") self.failUnlessEqual(list1[2], "1") self.failUnlessEqual(list1[3], "a") def testListOperations_004(self): """ Test insert() for an invalid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.insert, 0, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, None) self.failUnlessEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.extend(["a", ]) self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.extend(["1", ]) self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.extend(["abcd12345", ]) self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") list1.extend(["", ]) self.failUnlessEqual(list1, [ "a", "1", "abcd12345", "", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") self.failUnlessEqual(list1[3], "") def testListOperations_006(self): """ Test extend() for an invalid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "A", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "ABC", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.extend, [ 12, ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "KEN_12", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ None, ]) self.failUnlessEqual(list1, []) def testListOperations_007(self): """ Test append() for a valid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.append("a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.append("1") self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.append("abcd12345") self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") def testListOperations_008(self): """ Test append() for an invalid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.append, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, None) self.failUnlessEqual(list1, []) def testListOperations_009(self): """ Test insert() for a valid value, emptyAllowed=False. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.insert(0, "a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.insert(0, "1") self.failUnlessEqual(list1, [ "1", "a", ]) self.failUnlessEqual(list1[0], "1") self.failUnlessEqual(list1[1], "a") list1.insert(0, "abcd12345") self.failUnlessEqual(list1, [ "abcd12345", "1", "a", ]) self.failUnlessEqual(list1[0], "abcd12345") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "a") def testListOperations_010(self): """ Test insert() for an invalid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.insert, 0, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, None) self.failUnlessEqual(list1, []) def testListOperations_011(self): """ Test extend() for a valid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.extend(["a", ]) self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.extend(["1", ]) self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.extend(["abcd12345", ]) self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") def testListOperations_012(self): """ Test extend() for an invalid value, emptyAllowed=False. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "A", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "ABC", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.extend, [ 12, ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "KEN_12", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ None, ]) self.failUnlessEqual(list1, []) ###################### # TestRegexList class ###################### class TestRegexList(unittest.TestCase): """Tests for the RegexList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid regular expresson. """ list1 = RegexList() list1.append(".*\.jpg") self.failUnlessEqual(list1, [ ".*\.jpg", ]) self.failUnlessEqual(list1[0], ".*\.jpg") list1.append("[a-zA-Z0-9]*") self.failUnlessEqual(list1, [ ".*\.jpg", "[a-zA-Z0-9]*", ]) self.failUnlessEqual(list1[0], ".*\.jpg") self.failUnlessEqual(list1[1], "[a-zA-Z0-9]*") def testListOperations_002(self): """ Test append() for an invalid regular expression. """ list1 = RegexList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "*.jpg") self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid regular expression. 
""" list1 = RegexList() list1.insert(0, ".*\.jpg") self.failUnlessEqual(list1, [ ".*\.jpg", ]) self.failUnlessEqual(list1[0], ".*\.jpg") list1.insert(0, "[a-zA-Z0-9]*") self.failUnlessEqual(list1, [ "[a-zA-Z0-9]*", ".*\.jpg", ]) self.failUnlessEqual(list1[0], "[a-zA-Z0-9]*") self.failUnlessEqual(list1[1], ".*\.jpg") def testListOperations_004(self): """ Test insert() for an invalid regular expression. """ list1 = RegexList() self.failUnlessRaises(ValueError, list1.insert, 0, "*.jpg") def testListOperations_005(self): """ Test extend() for a valid regular expression. """ list1 = RegexList() list1.extend([".*\.jpg", ]) self.failUnlessEqual(list1, [ ".*\.jpg", ]) self.failUnlessEqual(list1[0], ".*\.jpg") list1.extend(["[a-zA-Z0-9]*", ]) self.failUnlessEqual(list1, [ ".*\.jpg", "[a-zA-Z0-9]*", ]) self.failUnlessEqual(list1[0], ".*\.jpg") self.failUnlessEqual(list1[1], "[a-zA-Z0-9]*") def testListOperations_006(self): """ Test extend() for an invalid regular expression. """ list1 = RegexList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "*.jpg", ]) self.failUnlessEqual(list1, []) ########################## # TestDirectedGraph class ########################## class TestDirectedGraph(unittest.TestCase): """Tests for the DirectedGraph class.""" ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = DirectedGraph("test") obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with a valid name filled in. """ graph = DirectedGraph("Ken") self.failUnlessEqual("Ken", graph.name) def testConstructor_002(self): """ Test constructor with a C{None} name filled in. 
""" self.failUnlessRaises(ValueError, DirectedGraph, None) ########################## # Test depth first search ########################## def testTopologicalSort_001(self): """ Empty graph. """ graph = DirectedGraph("test") path = graph.topologicalSort() self.failUnlessEqual([], path) def testTopologicalSort_002(self): """ Graph with 1 vertex, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") path = graph.topologicalSort() self.failUnlessEqual([ "1", ], path) def testTopologicalSort_003(self): """ Graph with 2 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", ], path) def testTopologicalSort_004(self): """ Graph with 3 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_005(self): """ Graph with 4 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createVertex("4") path = graph.topologicalSort() self.failUnlessEqual([ "4", "2", "1", "3", ], path) def testTopologicalSort_006(self): """ Graph with 4 vertices, no edges. 
""" graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createVertex("4") graph.createVertex("5") path = graph.topologicalSort() self.failUnlessEqual([ "5", "4", "2", "1", "3", ], path) def testTopologicalSort_007(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_008(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_009(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_010(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_011(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_012(self): """ 
        Graph with 3 vertices, in a chain (1->2->3), create order (3,2,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createEdge("1", "2")
        graph.createEdge("2", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "2", "3", ], path)

    def testTopologicalSort_013(self):
        """
        Graph with 3 vertices, in a chain (3->2->1), create order (1,2,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createEdge("3", "2")
        graph.createEdge("2", "1")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "2", "1", ], path)

    def testTopologicalSort_014(self):
        """
        Graph with 3 vertices, in a chain (3->2->1), create order (1,3,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createEdge("3", "2")
        graph.createEdge("2", "1")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "2", "1", ], path)

    def testTopologicalSort_015(self):
        """
        Graph with 3 vertices, in a chain (3->2->1), create order (2,3,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createEdge("3", "2")
        graph.createEdge("2", "1")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "2", "1", ], path)

    def testTopologicalSort_016(self):
        """
        Graph with 3 vertices, in a chain (3->2->1), create order (2,1,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createEdge("3", "2")
        graph.createEdge("2", "1")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "2", "1", ], path)

    def testTopologicalSort_017(self):
        """
        Graph with 3 vertices, in a chain (3->2->1), create order (3,1,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createEdge("3", "2")
        graph.createEdge("2", "1")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "2", "1", ], path)

    def testTopologicalSort_018(self):
        """
        Graph with 3 vertices, in a chain (3->2->1), create order (3,2,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createEdge("3", "2")
        graph.createEdge("2", "1")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "2", "1", ], path)

    def testTopologicalSort_019(self):
        """
        Graph with 3 vertices, chain and orphan (1->2,3), create order (1,2,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createEdge("1", "2")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "1", "2", ], path)

    def testTopologicalSort_020(self):
        """
        Graph with 3 vertices, chain and orphan (1->2,3), create order (1,3,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createEdge("1", "2")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "1", "2", ], path)

    def testTopologicalSort_021(self):
        """
        Graph with 3 vertices, chain and orphan (1->2,3), create order (2,3,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createEdge("1", "2")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "3", "2", ], path)

    def testTopologicalSort_022(self):
        """
        Graph with 3 vertices, chain and orphan (1->2,3), create order (2,1,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createEdge("1", "2")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "3", "1", "2", ], path)

    def testTopologicalSort_023(self):
        """
        Graph with 3 vertices, chain and orphan (1->2,3), create order (3,1,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createEdge("1", "2")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "2", "3", ], path)

    def testTopologicalSort_024(self):
        """
        Graph with 3 vertices, chain and orphan (1->2,3), create order (3,2,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createEdge("1", "2")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "2", "3", ], path)

    def testTopologicalSort_025(self):
        """
        Graph with 3 vertices, chain and orphan (1->3,2), create order (1,2,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createEdge("1", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "2", "1", "3", ], path)

    def testTopologicalSort_026(self):
        """
        Graph with 3 vertices, chain and orphan (1->3,2), create order (1,3,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createEdge("1", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "2", "1", "3", ], path)

    def testTopologicalSort_027(self):
        """
        Graph with 3 vertices, chain and orphan (1->3,2), create order (2,3,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createEdge("1", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "3", "2", ], path)

    def testTopologicalSort_028(self):
        """
        Graph with 3 vertices, chain and orphan (1->3,2), create order (2,1,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createEdge("1", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "3", "2", ], path)

    def testTopologicalSort_029(self):
        """
        Graph with 3 vertices, chain and orphan (1->3,2), create order (3,1,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createEdge("1", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "2", "1", "3", ], path)

    def testTopologicalSort_030(self):
        """
        Graph with 3 vertices, chain and
        orphan (1->3,2), create order (3,2,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createEdge("1", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "2", "3", ], path)

    def testTopologicalSort_031(self):
        """
        Graph with 3 vertices, chain and orphan (2->3,1), create order (1,2,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createEdge("2", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "2", "3", "1", ], path)

    def testTopologicalSort_032(self):
        """
        Graph with 3 vertices, chain and orphan (2->3,1), create order (1,3,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createVertex("2")
        graph.createEdge("2", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "2", "3", "1", ], path)

    def testTopologicalSort_033(self):
        """
        Graph with 3 vertices, chain and orphan (2->3,1), create order (2,3,1)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createEdge("2", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "2", "3", ], path)

    def testTopologicalSort_034(self):
        """
        Graph with 3 vertices, chain and orphan (2->3,1), create order (2,1,3)
        """
        graph = DirectedGraph("test")
        graph.createVertex("2")
        graph.createVertex("1")
        graph.createVertex("3")
        graph.createEdge("2", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "1", "2", "3", ], path)

    def testTopologicalSort_035(self):
        """
        Graph with 3 vertices, chain and orphan (2->3,1), create order (3,1,2)
        """
        graph = DirectedGraph("test")
        graph.createVertex("3")
        graph.createVertex("1")
        graph.createVertex("2")
        graph.createEdge("2", "3")
        path = graph.topologicalSort()
        self.failUnlessEqual([ "2", "1", "3", ], path)

    def testTopologicalSort_036(self):
        """
        Graph with 3 vertices, chain and orphan (2->3,1), create order (3,2,1)
        """
        graph = DirectedGraph("test")
graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_037(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_038(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_039(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_040(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_041(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_042(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") 
graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_043(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_044(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_045(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_046(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_047(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_048(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "1") path = graph.topologicalSort() 
self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_049(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_050(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_051(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2", ], path) def testTopologicalSort_052(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_053(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2", ], path) def testTopologicalSort_054(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2" ], path) def 
testTopologicalSort_055(self): """ Graph with 1 vertex, with an edge to itself (1->1). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createEdge("1", "1") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_056(self): """ Graph with 2 vertices, each with an edge to itself (1->1, 2->2). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "1") graph.createEdge("2", "2") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_057(self): """ Graph with 3 vertices, each with an edge to itself (1->1, 2->2, 3->3). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "1") graph.createEdge("2", "2") graph.createEdge("3", "3") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_058(self): """ Graph with 3 vertices, in a loop (1->2->3->1). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") graph.createEdge("3", "1") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_059(self): """ Graph with 5 vertices, (2, 1->3, 1->4, 1->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "5", "4", "3", ], path) def testTopologicalSort_060(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") path = graph.topologicalSort() 
self.failUnlessEqual([ "2", "1", "5", "4", "3", ], path) def testTopologicalSort_061(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "5", "3", "4", ], path) def testTopologicalSort_062(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "5", "3", "4", ], path) def testTopologicalSort_063(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 1->2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "5", "3", "4", ], path) def testTopologicalSort_064(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 1->2, 3->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") 
graph.createEdge("1", "2") graph.createEdge("3", "5") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", "5", "4", ], path) def testTopologicalSort_065(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 5->1) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") graph.createEdge("5", "1") self.failUnlessRaises(ValueError, graph.topologicalSort) ################################## # TestPathResolverSingleton class ################################## class TestPathResolverSingleton(unittest.TestCase): """Tests for the PathResolverSingleton class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ########################## # Test singleton behavior ########################## def testBehavior_001(self): """ Check behavior of constructor around filling and clearing instance variable. """ PathResolverSingleton._instance = None instance = PathResolverSingleton() self.failIfEqual(None, PathResolverSingleton._instance) self.failUnless(instance is PathResolverSingleton._instance) self.failUnlessRaises(RuntimeError, PathResolverSingleton) PathResolverSingleton._instance = None instance = PathResolverSingleton() self.failIfEqual(None, PathResolverSingleton._instance) self.failUnless(instance is PathResolverSingleton._instance) def testBehavior_002(self): """ Check behavior of getInstance() around filling and clearing instance variable. 
""" PathResolverSingleton._instance = None instance1 = PathResolverSingleton.getInstance() instance2 = PathResolverSingleton.getInstance() instance3 = PathResolverSingleton.getInstance() self.failIfEqual(None, PathResolverSingleton._instance) self.failUnless(instance1 is PathResolverSingleton._instance) self.failUnless(instance1 is instance2) self.failUnless(instance1 is instance3) PathResolverSingleton._instance = None PathResolverSingleton() instance4 = PathResolverSingleton.getInstance() instance5 = PathResolverSingleton.getInstance() instance6 = PathResolverSingleton.getInstance() self.failUnless(instance1 is not instance4) self.failUnless(instance4 is PathResolverSingleton._instance) self.failUnless(instance4 is instance5) self.failUnless(instance4 is instance6) PathResolverSingleton._instance = None instance7 = PathResolverSingleton.getInstance() instance8 = PathResolverSingleton.getInstance() instance9 = PathResolverSingleton.getInstance() self.failUnless(instance1 is not instance7) self.failUnless(instance4 is not instance7) self.failUnless(instance7 is PathResolverSingleton._instance) self.failUnless(instance7 is instance8) self.failUnless(instance7 is instance9) ############################ # Test lookup functionality ############################ def testLookup_001(self): """ Test that lookup() always returns default when singleton is empty. """ PathResolverSingleton._instance = None instance = PathResolverSingleton.getInstance() result = instance.lookup("whatever") self.failUnlessEqual(result, None) result = instance.lookup("whatever", None) self.failUnlessEqual(result, None) result = instance.lookup("other") self.failUnlessEqual(result, None) result = instance.lookup("other", "default") self.failUnlessEqual(result, "default") def testLookup_002(self): """ Test that lookup() returns proper values when singleton is not empty. 
      """
      mappings = { "one" : "/path/to/one", "two" : "/path/to/two" }
      PathResolverSingleton._instance = None
      singleton = PathResolverSingleton()
      singleton.fill(mappings)
      instance = PathResolverSingleton.getInstance()
      result = instance.lookup("whatever")
      self.failUnlessEqual(result, None)
      result = instance.lookup("whatever", None)
      self.failUnlessEqual(result, None)
      result = instance.lookup("other")
      self.failUnlessEqual(result, None)
      result = instance.lookup("other", "default")
      self.failUnlessEqual(result, "default")
      result = instance.lookup("one")
      self.failUnlessEqual(result, "/path/to/one")
      result = instance.lookup("one", None)
      self.failUnlessEqual(result, "/path/to/one")
      result = instance.lookup("two", None)
      self.failUnlessEqual(result, "/path/to/two")
      result = instance.lookup("two", "default")
      self.failUnlessEqual(result, "/path/to/two")


########################
# TestDiagnostics class
########################

class TestDiagnostics(unittest.TestCase):

   """Tests for the Diagnostics class."""

   def testMethods_001(self):
      """
      Test the version attribute.
      """
      diagnostics = Diagnostics()
      self.failIf(diagnostics.version is None)
      self.failIfEqual("", diagnostics.version)

   def testMethods_002(self):
      """
      Test the interpreter attribute.
      """
      diagnostics = Diagnostics()
      self.failIf(diagnostics.interpreter is None)
      self.failIfEqual("", diagnostics.interpreter)

   def testMethods_003(self):
      """
      Test the platform attribute.
      """
      diagnostics = Diagnostics()
      self.failIf(diagnostics.platform is None)
      self.failIfEqual("", diagnostics.platform)

   def testMethods_004(self):
      """
      Test the encoding attribute.
      """
      diagnostics = Diagnostics()
      self.failIf(diagnostics.encoding is None)
      self.failIfEqual("", diagnostics.encoding)

   def testMethods_005(self):
      """
      Test the locale attribute.
      """
      # pylint: disable=W0104
      diagnostics = Diagnostics()
      diagnostics.locale  # might not be set, so just make sure the access doesn't fail

   def testMethods_006(self):
      """
      Test the getValues() method.
      """
      diagnostics = Diagnostics()
      values = diagnostics.getValues()
      self.failUnlessEqual(diagnostics.version, values['version'])
      self.failUnlessEqual(diagnostics.interpreter, values['interpreter'])
      self.failUnlessEqual(diagnostics.platform, values['platform'])
      self.failUnlessEqual(diagnostics.encoding, values['encoding'])
      self.failUnlessEqual(diagnostics.locale, values['locale'])
      self.failUnlessEqual(diagnostics.timestamp, values['timestamp'])

   def testMethods_007(self):
      """
      Test the _buildDiagnosticLines() method.
      """
      values = Diagnostics().getValues()
      lines = Diagnostics()._buildDiagnosticLines()
      self.failUnlessEqual(len(values), len(lines))

   def testMethods_008(self):
      """
      Test the printDiagnostics() method.
      """
      captureOutput(Diagnostics().printDiagnostics)

   def testMethods_009(self):
      """
      Test the logDiagnostics() method.
      """
      logger = logging.getLogger("CedarBackup2.test")
      Diagnostics().logDiagnostics(logger.info)

   def testMethods_010(self):
      """
      Test the timestamp attribute.
      """
      diagnostics = Diagnostics()
      self.failIf(diagnostics.timestamp is None)
      self.failIfEqual("", diagnostics.timestamp)


######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the various public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      removedir(self.tmpdir)

   ##################
   # Utility methods
   ##################

   def getTempfile(self):
      """Gets a path to a temporary file on disk."""
      (fd, name) = tempfile.mkstemp(dir=self.tmpdir)
      try:
         os.close(fd)
      except: pass
      return name

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   ##################
   # Test sortDict()
   ##################

   def testSortDict_001(self):
      """
      Test for empty dictionary.
      """
      d = {}
      result = sortDict(d)
      self.failUnlessEqual([], result)

   def testSortDict_002(self):
      """
      Test for dictionary with one item.
      """
      d = {'a':1}
      result = sortDict(d)
      self.failUnlessEqual(['a', ], result)

   def testSortDict_003(self):
      """
      Test for dictionary with two items, same value.
      """
      d = {'a':1, 'b':1, }
      result = sortDict(d)
      self.failUnlessEqual(['a', 'b', ], result)

   def testSortDict_004(self):
      """
      Test for dictionary with two items, different values.
      """
      d = {'a':1, 'b':2, }
      result = sortDict(d)
      self.failUnlessEqual(['a', 'b', ], result)

   def testSortDict_005(self):
      """
      Test for dictionary with many items, same and different values.
      """
      d = {'rebuild': 0, 'purge': 400, 'collect': 100, 'validate': 0, 'store': 300, 'stage': 200}
      result = sortDict(d)
      self.failUnlessEqual(['rebuild', 'validate', 'collect', 'stage', 'store', 'purge', ], result)

   ##############################
   # Test getFunctionReference()
   ##############################

   def testGetFunctionReference_001(self):
      """
      Check that the search works within "standard" Python namespace.
      """
      module = "os.path"
      function = "isdir"
      reference = getFunctionReference(module, function)
      self.failUnless(isdir is reference)

   def testGetFunctionReference_002(self):
      """
      Check that the search works for things within CedarBackup2.
      """
      module = "CedarBackup2.util"
      function = "executeCommand"
      reference = getFunctionReference(module, function)
      self.failUnless(executeCommand is reference)

   ########################
   # Test resolveCommand()
   ########################

   def testResolveCommand_001(self):
      """
      Test that the command is echoed back unchanged when singleton is empty.
      """
      PathResolverSingleton._instance = None
      command = [ "BAD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)
      command = [ "GOOD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)
      command = [ "WHATEVER", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      expected = command[:]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)

   def testResolveCommand_002(self):
      """
      Test that the command is echoed back unchanged when mapping is not found.
      """
      PathResolverSingleton._instance = None
      mappings = { "one" : "/path/to/one", "two" : "/path/to/two" }
      singleton = PathResolverSingleton()
      singleton.fill(mappings)
      command = [ "BAD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)
      command = [ "GOOD", ]
      expected = command[:]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)
      command = [ "WHATEVER", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      expected = command[:]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)

   def testResolveCommand_003(self):
      """
      Test that the command is echoed back changed appropriately when mapping is found.
      """
      PathResolverSingleton._instance = None
      mappings = { "one" : "/path/to/one", "two" : "/path/to/two" }
      singleton = PathResolverSingleton()
      singleton.fill(mappings)
      command = [ "one", ]
      expected = [ "/path/to/one", ]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)
      command = [ "two", ]
      expected = [ "/path/to/two", ]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)
      command = [ "two", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      expected = ["/path/to/two", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ]
      result = resolveCommand(command)
      self.failUnlessEqual(expected, result)

   ########################
   # Test executeCommand()
   ########################

   def testExecuteCommand_001(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False
      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         (result, output) = executeCommand(command, args, returnOutput=False)
         self.failUnlessEqual(0, result)
         self.failUnlessEqual(None, output)

   def testExecuteCommand_002(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False
      Command-line: python -V
      """
      command=["python", ]
      args=["-V", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_003(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)"
      """
      command=["python", ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_004(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first
      """
      command=["python", ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_005(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second
      """
      command=["python", ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_006(self):
      """
      Execute a command that should fail, returnOutput=False
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)"
      """
      command=["python", ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_007(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second
      """
      command=["python", ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_008(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True
      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         (result, output) = executeCommand(command, args, returnOutput=True)
         self.failUnlessEqual(0, result)
         self.failUnlessEqual(1, len(output))
         self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_009(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True
      Command-line: python -V
      """
      command=["python", ]
      args=["-V", ]
      (result, output) = executeCommand(command,
args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnless(output[0].startswith("Python")) def testExecuteCommand_010(self): """ Execute a command that should succeed, two arguments, returnOutput=True Command-line: python -c "import sys; print ''; sys.exit(0)" """ command=["python", ] args=["-c", "import sys; print ''; sys.exit(0)", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_011(self): """ Execute a command that should succeed, three arguments, returnOutput=True Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first """ command=["python", ] args=["-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) def testExecuteCommand_012(self): """ Execute a command that should succeed, four arguments, returnOutput=True Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second """ command=["python", ] args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_013(self): """ Execute a command that should fail, returnOutput=True Command-line: python -c "import sys; print ''; sys.exit(1)" """ command=["python", ] args=["-c", "import sys; print ''; sys.exit(1)", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failIfEqual(0, result) 
self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_014(self): """ Execute a command that should fail, more arguments, returnOutput=True Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second """ command=["python", ] args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failIfEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_015(self): """ Execute a command that should succeed, no arguments, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_016(self): """ Execute a command that should succeed, one argument, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. Command-line: python -V """ command=["python", "-V", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_017(self): """ Execute a command that should succeed, two arguments, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. 
Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(0)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_018(self): """ Execute a command that should succeed, three arguments, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_019(self): """ Execute a command that should succeed, four arguments, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_020(self): """ Execute a command that should fail, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(1)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_021(self): """ Execute a command that should fail, more arguments, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. 
Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_022(self): """ Execute a command that should succeed, no arguments, returnOutput=True Do this all bundled into the command list, just to check that this works as expected. Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_023(self): """ Execute a command that should succeed, one argument, returnOutput=True Do this all bundled into the command list, just to check that this works as expected. Command-line: python -V """ command=["python", "-V"] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnless(output[0].startswith("Python")) def testExecuteCommand_024(self): """ Execute a command that should succeed, two arguments, returnOutput=True Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print ''; sys.exit(0)" """ command=["python", "-c", "import sys; print ''; sys.exit(0)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_025(self): """ Execute a command that should succeed, three arguments, returnOutput=True Do this all bundled into the command list, just to check that this works as expected. 
Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first """ command=["python", "-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) def testExecuteCommand_026(self): """ Execute a command that should succeed, four arguments, returnOutput=True Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second """ command=["python", "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_027(self): """ Execute a command that should fail, returnOutput=True Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print ''; sys.exit(1)" """ command=["python", "-c", "import sys; print ''; sys.exit(1)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failIfEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_028(self): """ Execute a command that should fail, more arguments, returnOutput=True Do this all bundled into the command list, just to check that this works as expected. 
Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second """ command=["python", "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failIfEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_030(self): """ Execute a command that should succeed, no arguments, returnOutput=False, ignoring stderr. Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_031(self): """ Execute a command that should succeed, one argument, returnOutput=False, ignoring stderr. Command-line: python -V """ command=["python", ] args=["-V", ] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_032(self): """ Execute a command that should succeed, two arguments, returnOutput=False, ignoring stderr. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" """ command=["python", ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", ] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_033(self): """ Execute a command that should succeed, three arguments, returnOutput=False, ignoring stderr. 
Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first """ command=["python", ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", ] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_034(self): """ Execute a command that should succeed, four arguments, returnOutput=False, ignoring stderr. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second """ command=["python", ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_035(self): """ Execute a command that should fail, returnOutput=False, ignoring stderr. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" """ command=["python", ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", ] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_036(self): """ Execute a command that should fail, more arguments, returnOutput=False, ignoring stderr. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second """ command=["python", ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_037(self): """ Execute a command that should succeed, no arguments, returnOutput=True, ignoring stderr. 
Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_038(self): """ Execute a command that should succeed, one argument, returnOutput=True, ignoring stderr. Command-line: python -V """ command=["python", ] args=["-V", ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(0, len(output)) def testExecuteCommand_039(self): """ Execute a command that should succeed, two arguments, returnOutput=True, ignoring stderr. Command-line: python -c "import sys; print ''; sys.exit(0)" """ command=["python", ] args=["-c", "import sys; print ''; sys.exit(0)", ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_040(self): """ Execute a command that should succeed, three arguments, returnOutput=True, ignoring stderr. Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first """ command=["python", ] args=["-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) def testExecuteCommand_041(self): """ Execute a command that should succeed, four arguments, returnOutput=True, ignoring stderr. 
Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second """ command=["python", ] args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_042(self): """ Execute a command that should fail, returnOutput=True, ignoring stderr. Command-line: python -c "import sys; print ''; sys.exit(1)" """ command=["python", ] args=["-c", "import sys; print ''; sys.exit(1)", ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failIfEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_043(self): """ Execute a command that should fail, more arguments, returnOutput=True, ignoring stderr. Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second """ command=["python", ] args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failIfEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_044(self): """ Execute a command that should succeed, no arguments, returnOutput=False, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. 
Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_045(self): """ Execute a command that should succeed, one argument, returnOutput=False, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -V """ command=["python", "-V", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_046(self): """ Execute a command that should succeed, two arguments, returnOutput=False, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(0)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_047(self): """ Execute a command that should succeed, three arguments, returnOutput=False, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_048(self): """ Execute a command that should succeed, four arguments, returnOutput=False, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. 
Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_049(self): """ Execute a command that should fail, returnOutput=False, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(1)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_050(self): """ Execute a command that should fail, more arguments, returnOutput=False, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second """ command=["python", "-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_051(self): """ Execute a command that should succeed, no arguments, returnOutput=True, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. 
Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_052(self): """ Execute a command that should succeed, one argument, returnOutput=True, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -V """ command=["python", "-V"] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(0, len(output)) def testExecuteCommand_053(self): """ Execute a command that should succeed, two arguments, returnOutput=True, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print ''; sys.exit(0)" """ command=["python", "-c", "import sys; print ''; sys.exit(0)", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_054(self): """ Execute a command that should succeed, three arguments, returnOutput=True, ignoring stderr. Do this all bundled into the command list, just to check that this works as expected. Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first """ command=["python", "-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) def testExecuteCommand_055(self): """ Execute a command that should succeed, four arguments, returnOutput=True, ignoring stderr. 
      Do this all bundled into the command list, just to check that this
      works as expected.
      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second
      """
      command=["python", "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_056(self):
      """
      Execute a command that should fail, returnOutput=True, ignoring stderr.
      Do this all bundled into the command list, just to check that this
      works as expected.
      Command-line: python -c "import sys; print ''; sys.exit(1)"
      """
      command=["python", "-c", "import sys; print ''; sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_057(self):
      """
      Execute a command that should fail, more arguments, returnOutput=True,
      ignoring stderr.  Do this all bundled into the command list, just to
      check that this works as expected.
      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second
      """
      command=["python", "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_058(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False,
      using outputFile.
      Do this all bundled into the command list, just to check that this
      works as expected.
      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         filename = self.getTempfile()
         outputFile = open(filename, "w")
         try:
            result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
         finally:
            outputFile.close()
         self.failUnlessEqual(0, result)
         self.failUnless(os.path.exists(filename))
         output = open(filename).readlines()
         self.failUnlessEqual(1, len(output))
         self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_059(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False,
      using outputFile.  Do this all bundled into the command list, just to
      check that this works as expected.
      Command-line: python -V
      """
      command=["python", "-V"]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnless(output[0].startswith("Python"))

   def testExecuteCommand_060(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False,
      using outputFile.  Do this all bundled into the command list, just to
      check that this works as expected.
      Command-line: python -c "import sys; print ''; sys.exit(0)"
      """
      command=["python", "-c", "import sys; print ''; sys.exit(0)", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_061(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False,
      using outputFile.  Do this all bundled into the command list, just to
      check that this works as expected.
      Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first
      """
      command=["python", "-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_062(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False,
      using outputFile.  Do this all bundled into the command list, just to
      check that this works as expected.
      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second
      """
      command=["python", "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_063(self):
      """
      Execute a command that should fail, returnOutput=False, using outputFile.
      Do this all bundled into the command list, just to check that this
      works as expected.
      Command-line: python -c "import sys; print ''; sys.exit(1)"
      """
      command=["python", "-c", "import sys; print ''; sys.exit(1)", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failIfEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_064(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False,
      using outputFile.  Do this all bundled into the command list, just to
      check that this works as expected.
      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second
      """
      command=["python", "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failIfEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_065(self):
      """
      Execute a command with a huge amount of output, all on stdout, with
      ignoreStderr=True.  This test helps confirm that the function doesn't
      hang when there is either a lot of data or a lot of data to ignore.
      """
      lotsoflines = self.resources['lotsoflines.py']
      command=["python", lotsoflines, "stdout", ]
      args = []
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      length = 0
      contents = open(filename)
      for i in contents:
         length += 1
      self.failUnlessEqual(100000, length)

   def testExecuteCommand_066(self):
      """
      Execute a command with a huge amount of output, all on stdout, with
      ignoreStderr=False.  This test helps confirm that the function doesn't
      hang when there is either a lot of data or a lot of data to ignore.
      """
      lotsoflines = self.resources['lotsoflines.py']
      command=["python", lotsoflines, "stdout", ]
      args = []
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      length = 0
      contents = open(filename)
      for i in contents:
         length += 1
      self.failUnlessEqual(100000, length)

   def testExecuteCommand_067(self):
      """
      Execute a command with a huge amount of output, all on stderr, with
      ignoreStderr=True.  This test helps confirm that the function doesn't
      hang when there is either a lot of data or a lot of data to ignore.
      """
      lotsoflines = self.resources['lotsoflines.py']
      command=["python", lotsoflines, "stderr", ]
      args = []
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      length = 0
      contents = open(filename)
      for i in contents:
         length += 1
      self.failUnlessEqual(0, length)

   def testExecuteCommand_068(self):
      """
      Execute a command with a huge amount of output, all on stderr, with
      ignoreStderr=False.  This test helps confirm that the function doesn't
      hang when there is either a lot of data or a lot of data to ignore.
      """
      lotsoflines = self.resources['lotsoflines.py']
      command=["python", lotsoflines, "stderr", ]
      args = []
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      length = 0
      contents = open(filename)
      for i in contents:
         length += 1
      self.failUnlessEqual(100000, length)

   def testExecuteCommand_069(self):
      """
      Execute a command with a huge amount of output on both stdout and
      stderr, with ignoreStderr=True.  This test helps confirm that the
      function doesn't hang when there is either a lot of data or a lot
      of data to ignore.
      """
      lotsoflines = self.resources['lotsoflines.py']
      command=["python", lotsoflines, "both", ]
      args = []
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      length = 0
      contents = open(filename)
      for i in contents:
         length += 1
      self.failUnlessEqual(100000, length)

   def testExecuteCommand_070(self):
      """
      Execute a command with a huge amount of output on both stdout and
      stderr, with ignoreStderr=False.  This test helps confirm that the
      function doesn't hang when there is either a lot of data or a lot
      of data to ignore.
      """
      lotsoflines = self.resources['lotsoflines.py']
      command=["python", lotsoflines, "both", ]
      args = []
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      length = 0
      contents = open(filename)
      for i in contents:
         length += 1
      self.failUnlessEqual(100000*2, length)


   ####################
   # Test encodePath()
   ####################

   def testEncodePath_002(self):
      """
      Test with a simple string, empty.
      """
      path = ""
      safePath = encodePath(path)
      self.failUnless(isinstance(safePath, str))
      self.failUnlessEqual(path, safePath)

   def testEncodePath_003(self):
      """
      Test with a simple string, an ascii word.
      """
      path = "whatever"
      safePath = encodePath(path)
      self.failUnless(isinstance(safePath, str))
      self.failUnlessEqual(path, safePath)

   def testEncodePath_004(self):
      """
      Test with a simple string, a complete path.
      """
      path = "/usr/share/doc/xmltv/README.Debian"
      safePath = encodePath(path)
      self.failUnless(isinstance(safePath, str))
      self.failUnlessEqual(path, safePath)

   def testEncodePath_005(self):
      """
      Test with a simple string, a non-ascii path.
      """
      path = "\xe2\x99\xaa\xe2\x99\xac"
      safePath = encodePath(path)
      self.failUnless(isinstance(safePath, str))
      self.failUnlessEqual(path, safePath)

   def testEncodePath_006(self):
      """
      Test with a unicode string, empty.
      """
      path = u""
      safePath = encodePath(path)
      self.failUnless(isinstance(safePath, str))
      self.failUnlessEqual(path, safePath)

   def testEncodePath_007(self):
      """
      Test with a unicode string, an ascii word.
      """
      path = u"whatever"
      safePath = encodePath(path)
      self.failUnless(isinstance(safePath, str))
      self.failUnlessEqual(path, safePath)

   def testEncodePath_008(self):
      """
      Test with a unicode string, a complete path.
      """
      path = u"/usr/share/doc/xmltv/README.Debian"
      safePath = encodePath(path)
      self.failUnless(isinstance(safePath, str))
      self.failUnlessEqual(path, safePath)

   def testEncodePath_009(self):
      """
      Test with a unicode string, a non-ascii path.

      The result is different for a UTF-8 encoding than other non-ANSI
      encodings.  However, opening the original path and then the encoded
      path seems to result in the exact same file on disk, so the test
      is valid.
      """
      encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
      if not platformCygwin() and encoding != 'mbcs' and encoding.find("ANSI") != 0:  # test can't work on some filesystems
         path = u"\xe2\x99\xaa\xe2\x99\xac"
         safePath = encodePath(path)
         self.failUnless(isinstance(safePath, str))
         if encoding.upper() == "UTF-8":  # apparently, some platforms have "utf-8", some have "UTF-8"
            self.failUnlessEqual('\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac', safePath)
         else:
            self.failUnlessEqual("\xe2\x99\xaa\xe2\x99\xac", safePath)


   #####################
   # Test convertSize()
   #####################

   def testConvertSize_001(self):
      """
      Test valid conversion from bytes to bytes.
      """
      fromUnit = UNIT_BYTES
      toUnit = UNIT_BYTES
      size = 10.0
      result = convertSize(size, fromUnit, toUnit)
      self.failUnlessEqual(result, size)

   def testConvertSize_002(self):
      """
      Test valid conversion from sectors to bytes and back.
      """
      fromUnit = UNIT_SECTORS
      toUnit = UNIT_BYTES
      size = 10
      result1 = convertSize(size, fromUnit, toUnit)
      self.failUnlessEqual(10*2048, result1)
      result2 = convertSize(result1, toUnit, fromUnit)
      self.failUnlessEqual(result2, size)

   def testConvertSize_003(self):
      """
      Test valid conversion from kbytes to bytes and back.
      """
      fromUnit = UNIT_KBYTES
      toUnit = UNIT_BYTES
      size = 10
      result1 = convertSize(size, fromUnit, toUnit)
      self.failUnlessEqual(10*1024, result1)
      result2 = convertSize(result1, toUnit, fromUnit)
      self.failUnlessEqual(result2, size)

   def testConvertSize_004(self):
      """
      Test valid conversion from mbytes to bytes and back.
      """
      fromUnit = UNIT_MBYTES
      toUnit = UNIT_BYTES
      size = 10
      result1 = convertSize(size, fromUnit, toUnit)
      self.failUnlessEqual(10*1024*1024, result1)
      result2 = convertSize(result1, toUnit, fromUnit)
      self.failUnlessEqual(result2, size)

   def testConvertSize_005(self):
      """
      Test valid conversion from gbytes to bytes and back.
      """
      fromUnit = UNIT_GBYTES
      toUnit = UNIT_BYTES
      size = 10
      result1 = convertSize(size, fromUnit, toUnit)
      self.failUnlessEqual(10*1024*1024*1024, result1)
      result2 = convertSize(result1, toUnit, fromUnit)
      self.failUnlessEqual(result2, size)

   def testConvertSize_006(self):
      """
      Test valid conversion from mbytes to kbytes and back.
      """
      fromUnit = UNIT_MBYTES
      toUnit = UNIT_KBYTES
      size = 10
      result1 = convertSize(size, fromUnit, toUnit)
      self.failUnlessEqual(size*1024, result1)
      result2 = convertSize(result1, toUnit, fromUnit)
      self.failUnlessEqual(result2, size)

   def testConvertSize_007(self):
      """
      Test with an invalid from unit (None).
      """
      fromUnit = None
      toUnit = UNIT_BYTES
      size = 10
      self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit)

   def testConvertSize_008(self):
      """
      Test with an invalid from unit.
      """
      fromUnit = 333
      toUnit = UNIT_BYTES
      size = 10
      self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit)

   def testConvertSize_009(self):
      """
      Test with an invalid to unit (None).
      """
      fromUnit = UNIT_BYTES
      toUnit = None
      size = 10
      self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit)

   def testConvertSize_010(self):
      """
      Test with an invalid to unit.
      """
      fromUnit = UNIT_BYTES
      toUnit = "ken"
      size = 10
      self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit)

   def testConvertSize_011(self):
      """
      Test with an invalid quantity (None).
      """
      fromUnit = UNIT_BYTES
      toUnit = UNIT_BYTES
      size = None
      self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit)

   def testConvertSize_012(self):
      """
      Test with an invalid quantity (not a floating point).
      """
      fromUnit = UNIT_BYTES
      toUnit = UNIT_BYTES
      size = "blech"
      self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit)


   ####################
   # Test nullDevice()
   ####################

   def testNullDevice_001(self):
      """
      Test that the function behaves sensibly on Windows and non-Windows
      platforms.
      """
      device = nullDevice()
      if platformWindows():
         self.failUnlessEqual("NUL", device.upper())
      else:
         self.failUnlessEqual("/dev/null", device)


   ######################
   # Test displayBytes()
   ######################

   def testDisplayBytes_001(self):
      """
      Test display for a positive value < 1 kB.
      """
      bytes = 12  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("12 bytes", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("12 bytes", result)

   def testDisplayBytes_002(self):
      """
      Test display for a negative value < 1 kB.
      """
      bytes = -12  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("-12 bytes", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("-12 bytes", result)

   def testDisplayBytes_003(self):
      """
      Test display for a positive value = 1 kB.
      """
      bytes = 1024  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("1.00 kB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("1.000 kB", result)

   def testDisplayBytes_004(self):
      """
      Test display for a positive value >= 1 kB.
      """
      bytes = 5678  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("5.54 kB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("5.545 kB", result)

   def testDisplayBytes_005(self):
      """
      Test display for a negative value >= 1 kB.
      """
      bytes = -5678  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("-5.54 kB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("-5.545 kB", result)

   def testDisplayBytes_006(self):
      """
      Test display for a positive value = 1 MB.
      """
      bytes = 1024.0 * 1024.0  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("1.00 MB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("1.000 MB", result)

   def testDisplayBytes_007(self):
      """
      Test display for a positive value >= 1 MB.
      """
      bytes = 72372224  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("69.02 MB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("69.020 MB", result)

   def testDisplayBytes_008(self):
      """
      Test display for a negative value >= 1 MB.
      """
      bytes = -72372224.0  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("-69.02 MB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("-69.020 MB", result)

   def testDisplayBytes_009(self):
      """
      Test display for a positive value = 1 GB.
      """
      bytes = 1024.0 * 1024.0 * 1024.0  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("1.00 GB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("1.000 GB", result)

   def testDisplayBytes_010(self):
      """
      Test display for a positive value >= 1 GB.
      """
      bytes = 4.4 * 1024.0 * 1024.0 * 1024.0  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("4.40 GB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("4.400 GB", result)

   def testDisplayBytes_011(self):
      """
      Test display for a negative value >= 1 GB.
      """
      bytes = -1234567891011  # pylint: disable=W0622
      result = displayBytes(bytes)
      self.failUnlessEqual("-1149.78 GB", result)
      result = displayBytes(bytes, 3)
      self.failUnlessEqual("-1149.781 GB", result)

   def testDisplayBytes_012(self):
      """
      Test display with an invalid quantity (None).
      """
      bytes = None  # pylint: disable=W0622
      self.failUnlessRaises(ValueError, displayBytes, bytes)

   def testDisplayBytes_013(self):
      """
      Test display with an invalid quantity (not a floating point).
      """
      bytes = "ken"  # pylint: disable=W0622
      self.failUnlessRaises(ValueError, displayBytes, bytes)


   #########################
   # Test deriveDayOfWeek()
   #########################

   def testDeriveDayOfWeek_001(self):
      """
      Test for valid day names.
      """
      self.failUnlessEqual(0, deriveDayOfWeek("monday"))
      self.failUnlessEqual(1, deriveDayOfWeek("tuesday"))
      self.failUnlessEqual(2, deriveDayOfWeek("wednesday"))
      self.failUnlessEqual(3, deriveDayOfWeek("thursday"))
      self.failUnlessEqual(4, deriveDayOfWeek("friday"))
      self.failUnlessEqual(5, deriveDayOfWeek("saturday"))
      self.failUnlessEqual(6, deriveDayOfWeek("sunday"))

   def testDeriveDayOfWeek_002(self):
      """
      Test for invalid day names.
      """
      self.failUnlessEqual(-1, deriveDayOfWeek("bogus"))


   #######################
   # Test isStartOfWeek()
   #######################

   def testIsStartOfWeek001(self):
      """
      Test positive case.
      """
      day = time.localtime().tm_wday
      if day == 0:
         result = isStartOfWeek("monday")
      elif day == 1:
         result = isStartOfWeek("tuesday")
      elif day == 2:
         result = isStartOfWeek("wednesday")
      elif day == 3:
         result = isStartOfWeek("thursday")
      elif day == 4:
         result = isStartOfWeek("friday")
      elif day == 5:
         result = isStartOfWeek("saturday")
      elif day == 6:
         result = isStartOfWeek("sunday")
      self.failUnlessEqual(True, result)

   def testIsStartOfWeek002(self):
      """
      Test negative case.
      """
      day = time.localtime().tm_wday
      if day == 0:
         result = isStartOfWeek("friday")
      elif day == 1:
         result = isStartOfWeek("saturday")
      elif day == 2:
         result = isStartOfWeek("sunday")
      elif day == 3:
         result = isStartOfWeek("monday")
      elif day == 4:
         result = isStartOfWeek("tuesday")
      elif day == 5:
         result = isStartOfWeek("wednesday")
      elif day == 6:
         result = isStartOfWeek("thursday")
      self.failUnlessEqual(False, result)


   #############################
   # Test buildNormalizedPath()
   #############################

   def testBuildNormalizedPath001(self):
      """
      Test for a None path.
      """
      self.failUnlessRaises(ValueError, buildNormalizedPath, None)

   def testBuildNormalizedPath002(self):
      """
      Test for an empty path.
      """
      path = ""
      expected = ""
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath003(self):
      """
      Test for "."
      """
      path = "."
      expected = "_"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath004(self):
      """
      Test for ".."
      """
      path = ".."
      expected = "_."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath005(self):
      """
      Test for ".........."
      """
      path = ".........."
      expected = "_........."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath006(self):
      """
      Test for "/"
      """
      path = "/"
      expected = "-"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath007(self):
      """
      Test for "\\"
      """
      path = "\\"
      expected = "-"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath008(self):
      """
      Test for "/."
      """
      path = "/."
      expected = "_"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath009(self):
      """
      Test for "/.."
      """
      path = "/.."
      expected = "_."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath010(self):
      """
      Test for "/..."
      """
      path = "/..."
      expected = "_.."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath011(self):
      """
      Test for "\."
      """
      path = r"\."
      expected = "_"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath012(self):
      """
      Test for "\.."
      """
      path = r"\.."
      expected = "_."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath013(self):
      """
      Test for "\..."
      """
      path = r"\..."
      expected = "_.."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath014(self):
      """
      Test for "/var/log/apache/httpd.log.1"
      """
      path = "/var/log/apache/httpd.log.1"
      expected = "var-log-apache-httpd.log.1"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath015(self):
      """
      Test for "var/log/apache/httpd.log.1"
      """
      path = "var/log/apache/httpd.log.1"
      expected = "var-log-apache-httpd.log.1"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath016(self):
      """
      Test for "\\var/log/apache\\httpd.log.1"
      """
      path = "\\var/log/apache\\httpd.log.1"
      expected = "var-log-apache-httpd.log.1"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath017(self):
      """
      Test for a big nasty base path with spaces in it.
      """
      path = "/Big Nasty Base Path With Spaces/something/else/space s/file.  log   .2 ."
      expected = "Big_Nasty_Base_Path_With_Spaces-something-else-space_s-file.__log___.2_."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)


   ##########################
   # Test splitCommandLine()
   ##########################

   def testSplitCommandLine_001(self):
      """
      Test for a None command line.
      """
      commandLine = None
      self.failUnlessRaises(ValueError, splitCommandLine, commandLine)

   def testSplitCommandLine_002(self):
      """
      Test for an empty command line.
      """
      commandLine = ""
      result = splitCommandLine(commandLine)
      self.failUnlessEqual([], result)

   def testSplitCommandLine_003(self):
      """
      Test for a command line with no quoted arguments.
      """
      commandLine = "cback --verbose stage store purge"
      result = splitCommandLine(commandLine)
      self.failUnlessEqual(["cback", "--verbose", "stage", "store", "purge", ], result)

   def testSplitCommandLine_004(self):
      """
      Test for a command line with double-quoted arguments.
      """
      commandLine = 'cback "this is a really long double-quoted argument"'
      result = splitCommandLine(commandLine)
      self.failUnlessEqual(["cback", "this is a really long double-quoted argument", ], result)

   def testSplitCommandLine_005(self):
      """
      Test for a command line with single-quoted arguments.
      """
      commandLine = "cback 'this is a really long single-quoted argument'"
      result = splitCommandLine(commandLine)
      self.failUnlessEqual(["cback", "'this", "is", "a", "really", "long", "single-quoted", "argument'", ], result)


   #########################
   # Test dereferenceLink()
   #########################

   def testDereferenceLink_001(self):
      """
      Test for a path that is a link, absolute=false.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "link002"])
      if platformSupportsLinks():
         expected = "file002"
      else:
         expected = path
      actual = dereferenceLink(path, absolute=False)
      self.failUnlessEqual(expected, actual)

   def testDereferenceLink_002(self):
      """
      Test for a path that is a link, absolute=true.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "link002"])
      if platformSupportsLinks():
         expected = self.buildPath(["tree10", "file002"])
      else:
         expected = path
      actual = dereferenceLink(path)
      self.failUnlessEqual(expected, actual)
      actual = dereferenceLink(path, absolute=True)
      self.failUnlessEqual(expected, actual)

   def testDereferenceLink_003(self):
      """
      Test for a path that is a file (not a link), absolute=false.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "file001"])
      expected = path
      actual = dereferenceLink(path, absolute=False)
      self.failUnlessEqual(expected, actual)

   def testDereferenceLink_004(self):
      """
      Test for a path that is a file (not a link), absolute=true.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "file001"])
      expected = path
      actual = dereferenceLink(path)
      self.failUnlessEqual(expected, actual)
      actual = dereferenceLink(path, absolute=True)
      self.failUnlessEqual(expected, actual)

   def testDereferenceLink_005(self):
      """
      Test for a path that is a directory (not a link), absolute=false.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "dir001"])
      expected = path
      actual = dereferenceLink(path, absolute=False)
      self.failUnlessEqual(expected, actual)

   def testDereferenceLink_006(self):
      """
      Test for a path that is a directory (not a link), absolute=true.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "dir001"])
      expected = path
      actual = dereferenceLink(path)
      self.failUnlessEqual(expected, actual)
      actual = dereferenceLink(path, absolute=True)
      self.failUnlessEqual(expected, actual)

   def testDereferenceLink_007(self):
      """
      Test for a path that does not exist, absolute=false.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "blech"])
      expected = path
      actual = dereferenceLink(path, absolute=False)
      self.failUnlessEqual(expected, actual)

   def testDereferenceLink_008(self):
      """
      Test for a path that does not exist, absolute=true.
      """
      self.extractTar("tree10")
      path = self.buildPath(["tree10", "blech"])
      expected = path
      actual = dereferenceLink(path)
      self.failUnlessEqual(expected, actual)
      actual = dereferenceLink(path, absolute=True)
      self.failUnlessEqual(expected, actual)


   ###################################
   # Test parseCommaSeparatedString()
   ###################################

   def testParseCommaSeparatedString_001(self):
      """
      Test parseCommaSeparatedString() for a None string.
      """
      actual = parseCommaSeparatedString(None)
      self.failUnlessEqual(None, actual)

   def testParseCommaSeparatedString_002(self):
      """
      Test parseCommaSeparatedString() for an empty string.
      """
      actual = parseCommaSeparatedString("")
      self.failUnlessEqual([], actual)

   def testParseCommaSeparatedString_003(self):
      """
      Test parseCommaSeparatedString() for a string with one value.
      """
      actual = parseCommaSeparatedString("ken")
      self.failUnlessEqual(["ken", ], actual)

   def testParseCommaSeparatedString_004(self):
      """
      Test parseCommaSeparatedString() for a string with multiple values,
      no spaces.
      """
      actual = parseCommaSeparatedString("a,b,c")
      self.failUnlessEqual(["a", "b", "c", ], actual)

   def testParseCommaSeparatedString_005(self):
      """
      Test parseCommaSeparatedString() for a string with multiple values,
      with spaces.
      """
      actual = parseCommaSeparatedString("a, b, c")
      self.failUnlessEqual(["a", "b", "c", ], actual)

   def testParseCommaSeparatedString_006(self):
      """
      Test parseCommaSeparatedString() for a string with multiple values,
      worst-case kind of value.
      """
      actual = parseCommaSeparatedString(" one, two,three, four , five , six, seven,,eight ,")
      self.failUnlessEqual(["one", "two", "three", "four", "five", "six", "seven", "eight", ], actual)


#######################################################################
# Suite definition
#######################################################################

def suite():
   """Returns a suite containing all the test cases in this module."""
   return unittest.TestSuite((
      unittest.makeSuite(TestUnorderedList, 'test'),
      unittest.makeSuite(TestAbsolutePathList, 'test'),
      unittest.makeSuite(TestObjectTypeList, 'test'),
      unittest.makeSuite(TestRestrictedContentList, 'test'),
      unittest.makeSuite(TestRegexMatchList, 'test'),
      unittest.makeSuite(TestRegexList, 'test'),
      unittest.makeSuite(TestDirectedGraph, 'test'),
      unittest.makeSuite(TestPathResolverSingleton, 'test'),
      unittest.makeSuite(TestDiagnostics, 'test'),
      unittest.makeSuite(TestFunctions, 'test'),
   ))


#######################################################################
# Module entry point
#######################################################################

# When this module
is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/data/0002775000175000017500000000000012143054372020274 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/testcase/data/cback.conf.70000664000175000017500000000061211412761532022351 0ustar pronovicpronovic00000000000000 /opt/backup/collect daily tar .ignore /etc CedarBackup2-2.22.0/testcase/data/subversion.conf.70000664000175000017500000000233311412761532023507 0ustar pronovicpronovic00000000000000 daily gzip /opt/public/svn/one BDB /opt/public/svn/two weekly software /opt/public/svn/three bzip2 .*software.* FSFS /opt/public/svn/four incr bzip2 cedar banner .*software.* .*database.* CedarBackup2-2.22.0/testcase/data/subversion.conf.30000664000175000017500000000047711412761532023512 0ustar pronovicpronovic00000000000000 /opt/public/svn/software daily gzip CedarBackup2-2.22.0/testcase/data/tree19.tar.gz0000664000175000017500000000165311412761532022540 0ustar pronovicpronovic00000000000000GEj0(}M,=NQ%}:KqG!C/ltnIBUkls8X׿__Ő릩Sh.6aw}W/ incr none /home/jimbo/mail/cedar-backup-users /home/joebob/mail/cedar-backup-users daily gzip /home/frank/mail/cedar-backup-users /home/jimbob/mail bzip2 logomachy-devel /home/billiejoe/mail weekly bzip2 .*SPAM.* /home/billybob/mail debian-devel debian-python .*SPAM.* .*JUNK.* CedarBackup2-2.22.0/testcase/data/capacity.conf.20000664000175000017500000000025411412761532023100 0ustar pronovicpronovic00000000000000 63.2 CedarBackup2-2.22.0/testcase/data/tree8.tar.gz0000664000175000017500000000022411412761532022447 0ustar pronovicpronovic00000000000000wA10 %7nudVnBVbȀ["KR.'הt9%5'YR45J1:Ѡ!8ڮռwosCuۻV p(CedarBackup2-2.22.0/testcase/data/tree1.ini0000664000175000017500000000040711412761532022015 0ustar pronovicpronovic00000000000000; Single-depth directory containing only small files [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 1 mindirs = 0 maxdirs = 0 minfiles = 1 maxfiles = 10 minlinks 
= 0 maxlinks = 0 minsize = 0 maxsize = 500 CedarBackup2-2.22.0/testcase/data/cback.conf.220000664000175000017500000000045111412761532022427 0ustar pronovicpronovic00000000000000 machine2 remote /opt/backup/collect CedarBackup2-2.22.0/testcase/data/cback.conf.180000664000175000017500000000060611412761532022436 0ustar pronovicpronovic00000000000000 index example something.whatever example 1 CedarBackup2-2.22.0/testcase/data/mbox.conf.20000664000175000017500000000061511412761532022251 0ustar pronovicpronovic00000000000000 daily gzip /home/joebob/mail/cedar-backup-users /home/billiejoe/mail CedarBackup2-2.22.0/testcase/data/tree11.tar.gz0000664000175000017500000000120111412761532022515 0ustar pronovicpronovic00000000000000:An@`y Ag’[* l@11} +qD3'ϘM\b1*;rXvd\1q{c\#WN*7O{$^? sC7b"ZmQ\#c3{+!S-gwCOĆQ>B 'K{ i./_3埔JxRvߘ ݱ֪Nj1V!.M!;coոmhvb6_>9fOLJp3|Bl5:Y}amUo: ۡ?dCS JF]5Ck%_P9_oy̿0JrN[!@d??٧WWOR] u?ed$8`iós%_3/u=PhOY _og_"mvؿwy/FPwC  b+r[?r1uvS5?mE`PCedarBackup2-2.22.0/testcase/data/cback.conf.190000664000175000017500000000325011412761532022435 0ustar pronovicpronovic00000000000000 dependency sysinfo CedarBackup2.extend.sysinfo executeAction mysql CedarBackup2.extend.mysql executeAction postgresql CedarBackup2.extend.postgresql executeAction one subversion CedarBackup2.extend.subversion executeAction one mbox CedarBackup2.extend.mbox executeAction one one encrypt CedarBackup2.extend.encrypt executeAction a,b,c,d one, two,three, four , five , six, seven,,eight , CedarBackup2-2.22.0/testcase/data/tree13.tar.gz0000664000175000017500000000064611412761532022533 0ustar pronovicpronovic00000000000000JBN0Fy\-7FhIAN#21N30L{6wӴV;%Ҁ󸯌ǴEg (g1QQ2z$>W l4!^Tmg#O_oJm@ZC ]*]_[r(~$It&4MDJof_c4A.otAΉ(aXR굀#NaDѿ v@Bf12>cwÎoElĊna|K8w&F1w.*cA.#گp5&l ]f<Ȃ7 rC00TklvO?'Ӎw3?    
;(CedarBackup2-2.22.0/testcase/data/tree4.ini0000664000175000017500000000042311412761532022016 0ustar pronovicpronovic00000000000000; Higher-depth directory containing small files and directories [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 1 maxdirs = 10 minfiles = 1 maxfiles = 10 minlinks = 0 maxlinks = 0 minsize = 0 maxsize = 500 CedarBackup2-2.22.0/testcase/data/subversion.conf.50000664000175000017500000000050311412761532023502 0ustar pronovicpronovic00000000000000 daily gzip /opt/public/svn/software CedarBackup2-2.22.0/testcase/data/postgresql.conf.30000664000175000017500000000043211412761532023505 0ustar pronovicpronovic00000000000000 user gzip N database CedarBackup2-2.22.0/testcase/data/tree12.tar.gz0000664000175000017500000022266111412761532022535 0ustar pronovicpronovic00000000000000Bstϳ.;m;m۶hǶmsǶm;9sw֬OWwWWUSUvؘegdcc303ҳ100~30137ߛ>>6_Gӿlcnhkd&$Lgfedh a´N646u? ?7?'%3c|s+c|Ws'3|}G/6NfFƆN[|koYB;;ٺ:~;K8Y;#ǿb-ɕDh)juv7v1vw0ַ20171Nfa-_fj|Golq#sLRN %nno-[>ӿ8~ΎV߇u2v"LDʉ V_N;20wtv0L{oW}{fV  ۹4Iw2aMmh+8[ۺuߓ9X+Fs̖̿FHqk0q"iorW88ZegƏJ؁;pmݾRKcwG|׿AM}C`m`n}o(Ǝv{t5374;]iַrk?;;e / 2-NM_fl(--!-۩VVOVQ;4/\-JP6 >2#++8H팍jOj}gwy6p' k1%4" q|W"SC/&F4ȐXB7ےÿ߆YOdb}Zژ~gt?336VߥEB@t#G2>]X"}}Ajm,rː)kBl'HJ=y*KcSm`6R=iٹ |(|knwu:F$7"MF~fǿD*}E|Q5`-VWZFX ϵZ)"7>}MYPKZT~.f[CU{MZ}0b;V>} Ι&4~<5~W&)v>IXDhi2=K.fGUX0ݪ0Lh#>uޞM׀|9RT1ߝid|~t=I63~&il¼{F9ul_9$}UP\4R;K>^س :F]h_[;ɩ,nviz6ca;'76Ӓ+ˋ<7?9m)<׷<9R|8XV9RAkF{]phm\zhEh9ozV{נKvO ThDnt[3߾[u|plux0Ыhc+Fդ+UfߵO $|KQ`o+1ߕDf" KפBɰHi[`N( ˡjC=HqRb@W'9w)Q%)`~WG;&veX> -xq ѻPU^BM9elaoQ=En$y]uubz"_ qVu>׿h[Ej5\空#?;)h/YƜl ^ &U$/H!NX`dztp` Ld{,BgI/J3k}wZM{T \B-剺{-Qd{ȎfVh397JwT)*k0i{ʺ$LƹiXZLĀB:eAzɹ)\ƥu9 kJs|ORW D1,󻝆s:~#|ё q]ujU9"|7Aܷ?35/bD yj}䢣A1Vh'}ģ7^wG}̠\1\i^~nmm."j01ʚq~~?$2*}(.QLp Zs|| }3Pj66OqM[sBbras51@_Uqʕq@++-J] ̭m,+Rx6G]]Ppċkgj]Oa]y@: ma:`KD6t;Xjt㡺#{/.*=ɐl:J݈>rr:} ^W ,Q+>yAʟjetoSl{_[S5=Yn!"v/uĮ=6 i,{՚{*@+;03"~b>< s54;pոeE3{ E! 
-(D*'x'6tEtx5GpFH TEl2z+4K;@神bBZ\է祜 oNK1N8*VP1k{~"#/bSV'Tr KUsF0YIn^ufhS4.PES"SW:˼hg./#z5.s>dI5zͬ6 -r^U S)v4Y[ԛ>7<H<+>\D//w_!ew6Bg2Eb#&hVﴱ%ljGSwÀ-rTL܈?F&+گ{ձ5Zm@j"8<é{iըWTwѵYU&("bUTrk #>ֺŠY9q ܟ`Cj >ڈXø43Cgx^ bՏ tUʞ?ГI!R>fXbb IIFz2!=oцF~hb|{7h(F7>vU1|Tv3#W:Z'.sǝo3\jXhB4, n|TEE)t\@¥S &z0#}6}S3vαvYDj/<(62.NJܕ(AwtP~ش15|b&*<$i/D+u+(z$$ub§8RazE^w5Iо97tR6lhY>c.5 w+z#4,cx~;k l<JrzۑCpJٕ_d+Qz˧И۪]; ֡N{ɫ2vm|%n>eЭ%R ʋo+!+@-.O}{ uo>&lwS[,m8A!& F\g'xTոr> . $~bJkiW˜m[I!YoMF;_xF*یH.%͛D OBb*).lB֎R!]2g[nCn B_Z!ȹhzHڊOؓ }"ZC^YXrnQ%gjjF0R^1lǿ|#Οs*R#~_OiibľeXw/X  L4*#>瀹c?Ͽ>_{#S>iIyk)n$ǗЩ'\_`?uy.Nsnd,2ۜ.| OOy V.wbźt~B:,+8RLJ[o?R28J"6}KDF uȔ>r#ᎁ̈SEy? |sV?L!ԗ/i9aMQ׿L|葙 CЀ|s mk[*<:#djw*p+#'?kݴ$~1cSsc\{1C ʪh)L+giTꜴKmXE*[4cТ-0XrԢ/H55(xtG׎J44F5rr ضE)ßv+('B=M,O-Fʄ"pdž>,)+*0Xrm޾;0xj *Bo9 y\yHE`k*'˘RgݐqZ k)+$0{f&p" :R0oa)WQAfu8 ^cG.(gh,$-zAkj\TRłXIJ? ᣕ|4yLJ^BhwRtyq {ߋi6wZߋRun ˯U4uE5q/XWy Jş芲-Ҿ8>K;c `l 0^;, c@0Â)Cnm A}ӣ,X㿛%~2xe$]PBŎqEhn&}?BmtYhHhbzHT܌*Ka _F8/yߣoFtDRԸRz[…; 4n<˺ŀN@h,bu+ #ڎjZkY4o3z7G'. :, .a# H^-(ܦ&@D!^%=齸n,μ jBYDGB12S#/vtDԢ3nrq&Cg XKhDDt_<*1iʂdʑ1AJ`;_0tc_s+g4mp&TѝX /aa93H.vC\̄D,!Q~L*boBg #ɜyp+%$YXK"Lr y,s$!%MG K ];cQbQ>[ԷgK9ril{1me߹yag! cO5rl ;k3AgohnYFF!CޞN q(F͖YX-R=@8Ce$x shF~KQi+B+c(%: Iٝu_SAl׸eͪr)ma2M6kG×v*9CTMX+~@c,(!J0t%nXI=55>mwG"5 :q9 ȋOʒT24Z'_l[O?TiޟU/V.Hrukv{G_٠]-^?1^8#H9Tyag?b*33hBczVSMǫɂ7p˼`||dʈ6fǜ1_)6!?- 'ms__#Z5IquIuuJ31[>p \h0?NeG(JFtkkѨjÆ"`@49aRhւR`6N,!*٫?ktVߎ@@θFsoA NC f[Qlr &Қ 5&<`VUHb 3r{sMCʷ n#QXٗfԴ mXlv[ޯj| +ŲoQL;RWs,#vO'F0e!+B1gE~-aL;-s Y% rm 9Ŧ!3U"zq3h !#ޯZvQoE!;v%vŘaS1g Bq63`Yb5:{f3|Eދx13XlؿY쉉(JBh’w13YxB5eχrɀ%s;YÐcaMPK`@I<'-H:ve܃t9,zIWx PnV1MW.4v?^4aҥ{kzR!xfĐ=,B+$-jƫ,"VY'Z֝ V~47.OҧTyUoa %Ox_P7SD$ Kvӏ/j \`C2?{dDp82L \b{ZK4@A''Ԯ/]i/e@! 
h ٜBSDǨ|ـP3ۍlA&,/Y/1tή`Eތ/E ]V;@rJG|V 6 ?/5 TS'._vvlP1?Z P7[X/j}"]3zA볪Q5n=h{MƀRHmհ |bX]&="h &`OB55?C'Ͷ6sԁ_@D׃4g ۑ\YZ$@ǡp&40(@:׃=YJ&WhO%x?e$wcb fōvxtE[b`TDmӜjSh~ `с<(OaSkl 8|k[(qM;+^E)2%k bFk9b`\VmAwa7y-BYmFW2ЀS-]kg?:P`#B%=+M;:TrGA|;`۽UxU%9oiV= )Mo?yƻ?r{@?[{C"l"_LTtPnKz#mN8V)Qݎt`lKVbX&$thե |։F` -$HViK>Z0aΈ8-Ҕ.)@=2b)ЕTTyq=tpʈ ɊYآVKҝA`T@k-N #K@`r'3 )8poJ=\~ENgjU4FL+ɣ7_G.E{ %q2( .#Ź%pߢx_@kխ 9PiO݊Y MIόD3fҺ/7grۧ/oZ慲a[` 7%γұyzʼn7)00/;}.ΑM;:|0`8=0R:xwJ0 7qsK뮢+=RD?RjiP}Mހu7 t^)2q|͞ lfr"E T2 rFByyX86 ԝ74V\D_-!^nȺW;XI]drADgD)Q4QX 6qk)vP~d05a`'<> Wm:{偤. *YRM}U\ LPJP !)w5EPa.ݶ92N%1h _rѓ RJUC.;GQ _7O$%\a8 2(ǻ0b/WoF5sZ7ZMq'G=K-@)?F宛kPFlDcift~k1EM~jwGxDp28h`u 0ϒ܋9tB{% )Xwvʎcߊ^~2 vyЁ+Qy2 >W,fB˗Y3[ ׽1..̎׿1 L@{uz|]*'}GmK 8˟NG{9(fVd@~36 RG0m NƠdОӓ,%t]8VR5 rfi"eK2f}':zx:Mk!qWE܈?i'&V'#d-O '^R u%@h4|BPx3JR?c&N@4+8]íBXV7LcyN1}\lmR~R_ ;DXLh'XሎíH+MC[0]|R׾ʣUA7 /^_ Ut+m'W\]ߍ+e }E6ɭʌB/g j?rM_mлAlQ^V&OAoy"Xj6!wcd,$9ÏAU ag5&`2YP:l:՘BD,%DP3LJ 74"_BRf==d䥿 ݲ빆?-5 ԧv:4T_i2]U ޒ[E|<-hxTW(1﮿)l҄ XzB,̫@Gj>X:pE53)j<$ Y N4LT^*3u08aEGXMq:2%OgEtd3@>;fS%K& #Fڗ}ۃXdҪ0U(pRi#{ʇԙ+D1ίy dw`:o!,;=2Kc9-+"b=Kr0Kt%i?^$#Е8`""AqD\˝7!ܳCE[Yc cÃfK#-_p3Ӭ[{-pCXD޺gidx ԍB`'dzaf?OH;#.֘J0}Ң')OBaV)mY2#乴 olUɫ+CcOz[QR&čEZ[R!s2]iP{刺O* uC,Ӏ>LT)Q'(P[ODڄxIB(^7)MYlEeai{FG[V=ase!1b9ԁ0~#0 2A8×~䋱r/1_7/`D0+0FPBB#?fIPómw70nc,&Fn%Kpu8iMpu_k%n5?s$v8-ׇܣӕx_OWØN燀^#aDvUz7_TO>NU3,EN喪KzUخ0ǛXF4N71MP(ﯮ\pu`MR>vgWkP!UWA4MJ+ [xT" fCeMp[lgvIOq-z4dp+ƥ3[ٔ ϵ$츯oyq}u׹zGq iY^tXER[/MD8]0 ;vkU5ޛTCl=gUꠏCVC0=H4i_Ǧ)_.V+5Uܝw]&Q^\J#&җv4\WomEP\v6(nmEA52$PɥAjN[L3lkD,r: KyS~5- #{FF|\³v;dZV%7Tn;/|GSi(O]y^te|/TlWT ɇ ]H81鋹oefʓ <mGcޤj':mMK`1h~]֋de-h/t-zh(H?uW*Vv shzh&`ğ,%AOPyO 9:h˸3ȋi'$^'ᲰX{u2reav(塜3cU|MD{YUfrSN+qdO%cGo`҃s]в"6(/~l%S>lJ趠 OAq`ճ6jcٙ6?Y[7*s@J. ݣ,}OnC ٱmQWSٖ@nOs 7p`~TvWN*qJȠt^(o9kw{\J)S8f+{sG {-# U2չG2!3rce Nl#YM_Q䦖v¦+x\]F%~{ %-zΫuZ]j !kB QQYl2h7Ddښ%‚,s>+dx"@๶\8x务N4˓xE>&_MB)[W>MaoOX z6lS5 U޾ȁ.M'o{OX׬x X#/[hX׼Δ~"MJ-1M`PNzno9ߜPOxe3? 
@>,`a X@$fmIc@L@R~BhєjP#˗HzÍ.h;d] C ֓ & va @;#a %!0a*Iq(+PMQbU :P=|Oa[ۖծ5h)ȷgǟ<?wsG#!^^ Nj.~BG:`6ٸ|M5tOnkAKӎ2Ѐ ơB!,7sW%g(H΀ kUA'o׫ B@s˜XMj %] h$Qmy`FwMyb]dpwp%WuHBqkX"_`*K5M~!UId VHS#cq 4A Wu PG (&#ou#1IUICPcoe-Ob >$mQOvZ" ɏ^͏O['TTa2RgO_&1 4¢ uLW&?ê淢i(@(n&uϊ#b'[}. h V8?ln>.@2*#Sа#.ĶR}A`ċY 7Ep@0&B1g`kgmłS`Dl[K9He`K{n'a/˪Bm-dY`ՌDiEɍ L ؜F 蜽Ȝ`5 LѭМɢj=KAJκ Sm,.́g',2&lM_byok넯ޠ7;Th+.VFO"Tt[%Sٻ^ܻdmǬ5`uR2/d fDF kKC~a-š8BzhD("m0pς!h2~F +5F@}h3O/ [Ă0*ۯkT@tv5W@gwbCq\lFCq ^>ٜSC38:r)P~Qh"ZI(iߐ;S-`l}C"Cr6~g.fR`kzll{9=}y0+LvtM !Wϔs hJѮ7ob:N NRh̰)JE~Ox7:9yЕT]Z$FjEi뙜88nNe'W1 1y8˞yhG%WsJ#[`f#h$lB%jǮeoo̻ ?p1  9hNE6̰m{6Nt1(9|mo+Hk0  h7bY\'.條$^ UT󂵧?̛9T ~Ȉoak} lKC6 0 H0$--(qv{v\F^veU!m0l\?.nI[蚉|5[3ireEmyR _ :A.og]W!FL9@M;@LM{9!FD6kS{ rZpFߡJY*W:bu!RȄ8S8yV ĸ z@f~r#'57ĉH^ MPÓ ۙGvk9:]M%@0*qzebb4%+(()S֦<4(.O!b4?3ȩT;.26Phz.7 wl$o*^B\|{ʁ e#~Kz G pWf-WA0@ʢiq7Ʌx&K{};c낰 yhPuKwM;.\646 ϾɾK15vM@ 2KsǠaL5Xk?p]?4^[P(Y+@{R&,|]Gv+)XiZߴ0PNՋ_jr_BᯇFGI;~?{[nZ舥#Pq!orzknN!׻ݸ+@A@(PAqcC >jGn5 Jzoܹq3Fj0 -Y[oh aYLK7f\ i|}:ГD*p olĢ G;㆟ &'{!VgA)W{0sѝUڔoIAg!IC:RX` M6!P?f)~OdƵ(`:>ۋEǹAJR< 4I08Zf,k;״|O# I4sÉ3{QT9"7@ƶ?]+˙"z[ V ~ w֑Q A!^DAΦu_ ȟs}_  XslR:L:Tb:(#A ikc'V:ĮYs|ccdY `c 9 \䲋 ?mr1H[CoMI'e ~^P!m5"#o#fS:;N\[gP1J/0lTMAδrv]*聢U:lfbO dd+uY쬚j/Hdl#ga+H}c4MݏSz?fXBxe|=e7\$ CTRQJ fwR: Wfwsq1]lȇsK7#$2m %'?oXLCsG\ z7 _ayϲoA3a53w)(2{6aO ͕C lֶl!'MOCq8dP :j151oߣ\TOLTMIǘ/ކ{ njZyN- `ܶ6*xwl册,E7}!`f6l toMt|~/wEO`=<2Co{L~]]¿}`.m#C:Sh72z xOռ=XU7\x= 9DBc9;P7\ \ h|Ld=<5 : EoE^{ƿq`q`se[NoQ)gXuǷΡqFvJuNVv.*SĹ:Q߯Y^r/rCnɷeoVbr/K 7M=d V{'x _ Iȝ*o ! 
(c~F=co[ `%@ džKM HT^4;iRh?*hí 2z# cL+jߡ⯜s5Ŧ>Bf~E[J@j݅9'ͿPJ*o)Jz7)BmqrV`_A}rD()(AHb 7z}g hh !ʅ=o$s ~a Hʞʼn^UJr>Mq)X砓ɕ)?R"RƏ-e}k' rճ41 ch"(FצFPq{]ypb_4ȿdtYRfK'"լ4ȥ|27Y+*/~ LM -GQT96d(f.jEC_jr))#A46X>8W_Ẃ l̟D/yho4ٌb<%X:dP*n3:S5nqn*#]Nh-]r'^# Z$WG32"TuD |$$&ClhÅ)u o'( | To'(QQ1Arsְ$ |Le1z +:qc¤Kbʲ&Dmf35Aw8ykn"yi掲5EP߀1TsBEhiGo83 -j/i Fe̐ ^,@LMDdjb]fy/IYt.SyxyU8=wI:LL<`лd؞BZA\4p]Et&> p)y9ܹK{-o7UPR]_K4ۍ>@$qz8Ê>\1 _?bL>HT1| $/̶%}c>|bPr(S>C*IJCl8E'9\=V2]ɴ(2eܿ2rw&A2+&^}$RtS"F1^@ʚ}%92!?F"!r籆mA  X#hQT؎ pG^4K̹ ^|*Ŕa'9 fW\f[0zEopH^1(r4&P@'oZ bC_{ p=c\%f` 2^+-&b6G& Jm@EO?~8[_ 㓙5DFWqp c^ߛPbIt70QL q]ɳz|5,6Z&ct@덆}vωRP ÁJ ^~sZ#%[O,,-2@5:tJs;Hr";49uήZܞϣ\ܬțˣxwJsi#;ꎙn;cU93OK4YfQ.sx?f!Ot)DdQecWOّz8x~+Ņ UV1BW:_+qƝ_/ 7TS_6)ұ>|]U ȣ?9 Ah;WYL*M>9 Fr+3cIǬV UpnWN#qHrd2gLo(/˺ 'K0T}=v!fDR_JH:/Z' }VNHNlX?h~;n>ʠ7gR.QI~=z-o?wT|9 B*,SIR!g+䪥tjD6y3lv4#tuvhQG^~$?Dێ@dc^+IV<2H37yRPm/ET8"0« 1jERc.r)] eي݄&R8$2tI`)h鴪KŒ߾'9 {R?'1=2v)-\!O0Ic >}447d\D)-LgJpv۫;%Io>XfFQieVr֏aG< 6׃ dmS\0QF&zǽ{J斓)F{S2sʣi<髕(Ӓ{tޘ<I#9 '5H{|т)9T~|x DSYocFMhx;bϝ~qt-Թ$82խ!ݴ=0''dest!5,AăorCuEQأ z5'FSGs;Uy@qbJlmr!O.H ;ldH?Hm,SDrri`W<-+,GClȔ̩͝E* y<9n94$2Dvj"r$ϭ;ZV.j GIrKtbyEMr?Y"tvl7  .{oQB9-dM.#'XL^",>S#aXT!;G>Cȉӎ~0\(ê u*ؘ "~"?8,pӦ-F93:0PnByfg2-F +rVc^ԅ|N&5JM F  ܑgnh}Fdo~Պp6ad=:1ƥ>B8_ gL-~Oe {z<&aS_M| gX]Ŀ=N=œ21cŏ},CMqoMR;o9=G׿oLa_xѤqNIGĨxkeNɪ-LvʌBY e)zZL'|*D3ʅQBx$zK"k?Q u9Eqșc0 _ (,+'7P n?gXz0#PJ@jlMOwP@  ,7h4 ݲ p1 D 8߿  Vz8b#š@[0e-bB )K& LC0+'! cBP+jdE)‚D0M2#T_ t9z{%8 A;iQ+Xso:i? OJ%t6vmtM'b*n%HE${FAx!ڇYr=3b~kjg-*=..]=T/> Q4`[p*ҶLMۢL"5mv$S+S<>|0ӹ$q ԍ&hפDT N,KW,ZSI q-&t+p&T4E#K_F0=XPҒB˹*Cڿu+G)ӜZHrWI0sxTM\Z ( 87{,av /PmyiՀ*JWԑJ'{7cᄣ:!s@%lU;c1fb1\0֙jPY%Z)x^:|a?ԑ1:_&5QQo6>^m1sS u^t\sxD[m(w+bh%.EɸΝ ofwtr~_wdEp-?fU_!{ۼ FʯKޘ~M: ',Y&… 9Wr.V8U@VdEpS.DT7d{l\0c*50BS]BEbN/-|rZ$6v%чm۶] FZ+c漕PpM:Tr&ٓs H 83́K:8X颹c*d%7+LLMv-eU%9L|RJ7uN[YA\ɲ)v916s@4 K.GWW Q%,!s|RVL)]*kebt h{ pl'CQ9/RUMIPg*{\9 vY܈cO,sؔ"E%djjb֥db|ޖ'EDuս5ݛtG9|j\@m\*{sY3}xt)iޯ>S=|}{Yd >0'z[lӋf0t@[dA71B2I^q|Ass-Dtw'2)8ʱs秳Ŵkas+MX%2ڵ&tLnyGOލ-Nl,fT(+'!q\jVh<. 
_:5*oqtway+ioZ?sp˾ο86|Q /6laݥWgluX},p>t`&ٴeSZ(8֟g`rmŔދ>j_oH:bv85;OHMi/N4K&39C[$epE Oȏ `r47OV5!^J>Q)e+p!ԼΥE5^כ+E㯕㿅LPJ.r%,o0&I*3۔&b^P92y$ӪF/4=&/,o]d_NxS*6NH^vIюγx{JM eoi'W%$!m3x* p [_Kv13}eSg?s\&UXsdVy<=*,3 Ow!PA)G^R~(ۧ)W~D'QkO\2<4{,G>ꩌ9XNE䃃Z(?/ct3|2~Ը1|4R2?CJ+oڃ^ps7{z/7Usv5{SV(e7B*%B'K#pdR0=. ˞z%ُb d3/w7~ߑB=NN|_R6(N3G8NgtnjӇYȠEo,R=E 2U EĆĩliP&Jo:nCN'͒A龜-9DBY`3jy40sȘ6A)y@QE\xHuQmb웼/2^](;mP\j"nA> *Ei{tZZ 8ųA> oS<f!lrJ"f1Jύծi{Υ6p]d&jw8gPgA4&o^׀Xe*9tlG,F}ps6t w4PMB(fz ͇v:#Q `$Dȷ䐈ig}VN(%c6ycE6ݯ}6ǿe&z(Շg5;gx&w?NKre;JlZې;D>﵇O@J!|x!{GXf}6S{AhY^J:vhn/RVUݒ#+?"]͹.5]@71—⬨/BE;(>m*(W\_U/u=>sIh=^% x*bn'8 lytc;: }}_S|*4Ɏff)'ȫkǼF 6 _ZY my˅ȥ]}uU]LvP1&?d A0]ZN=T= g\roK;^8 =DB6c&7#)W$4Пk6* =فFRn#d+1$f%2fyn{Z|V[2WBP9bP pFFMX0l+MbOH!﩯LFaȼkhlxz4"3Wc<#a=ac8,k¸hx9Af*}BM'_ΐ-պ{|āDx9qYH;qr"N5#f 8]":S8?Fms9ױm[ ^) 9u;LK>  T33LhM`C,M:1$t)Qn-)bF)[#*ӄ&L byC 1I=nDhqT>iiRmқ榀#)G^5MZɮG(Xlsǫ4 6lj*[4<У([,Z8D58FVŚ},mH.ӀoTObHцuQwTШ<A,FA~1֨¾.aFVh޴5.LˌY;w#G[1ߙ,jp2z̪NYJ_3A-'eʞ"(_Jbɓ4?-jX$=oipA z<2h?@2əYGI-HQ %(2tuP_5ONЏRz|ҟŲNטKL<=J%k`&?`Dz Rj^_.V[iʄVrE!h>g v*l# '.k)u 9gWXWG<jpeq$LS=(I#x:-˦Yg9??$E"X^6'パUx &" Gth-{lP  7}x#{s} }FqX<ȻFM!wzwU\!1e|YeDAۮ+`93hKfQ:{]MQ%4LpdDžٸ6r0EB]mԖVSG~t57'2j>"UQ.3x= iq]!hqw9ײ: Z:Mϣ12 nlD6iyRkkb D8'1خ$nE橼3Ȩ+TÙv^Kjd;`ۦtm;ꓶAAA_m OgMs /,!B}̉h o4Mj?c(h]({ٓ PECC[ۘWs0aڥ^;ިyes#$LбKUQ7m~䍄W3sHbj;JF>8UNT$5dɗRd?Zf,Z+kN~7;$!֥pҴT-V+iJ=g7!ۧ! 8өz  DAƈrQw)V>q{4;Wي%Q W9O8P4֮e9c$QBsaC- I4m=ß2011n pHCTi+uGMCBiW}`,Ћ|pB,a \b~NOTH#WXd]L80'! ]3Y>*xKx!*o(ϡ'PiS=+UZQl4gWӒ@Sl$G ϐ9h]鱀eO@1_XYayٚ3s]&,h5S,1{yo $;dWsnQ$Vا/>1d-8v凊,M 猈܇n&}0MA9sb!$A8Q!QK4Cu?4\6!&uΡWA4!lm9\"Y k^Oфvmi=5W7[]Q2T"2Ⱥjw=:6E6!1t.~QsVكmQ@;ӺK.!@a`_q{T˟_vX%%!':<5>`K,U4ddd`J7JьYJ/Ofc+;mm/ zpC9$hI] eBh+Ό('ˠ ҕXVCbn#$&7ZޱnšdWb}@ :4bZ$ !Z eI܍ *Mg P?_ { } gdCO=YNaw`ݪ`R힄t X,,Ȕ5±sc/sn%w IXq/w"9b8AӧNOGZndQD_{$.7Y {ߎ (ﮇ2IO-ͪE@/"%ɯ7pϿ u6: LNfZYa. < ~r}90ýu x@D_"B ϻ = h% tagDq\}/ˆ)]/n(|QZp? 
Uz:, Qǘ1$-xO$p.*sGD{ ꗿR;=l єi&Ef33Q+S<Omk{V2HDgjSRQ";yH+eN7dQF5D qTBZfxۚUf V{(yy'cPN"]2o!tǦIW sNdE}2@s!ȸ!O.Hc^%nůn*N=YL9kpA {ĥ3eqYvvQ_68)Y4.5y.гn(qe;RPx#fϗ--[YXD ;eVi8L>N"M= (za!4p2Zt%ȫvס:`ABCtf$Ba рSF.70wS'qócK3[bbY#?-V q0 ZDOQf9*GCU$z+⬙bxpGg]iL :#Ud4ΟBH*xLǁ_lA$@İeU56%B=pu]V/[6 kѥiXMɧnCഇ4^0c.Dcnnȡ W'G}fO0k7np|\xW]kUᄀKykԞDϾ\*ήΛD\`ԓ%9ћMCd^.bۿu2M (핗g\̓.%=l݂y`ki49޳"Ϭrfrun`뢉|ʴٿd Ǒ7i5XV B(_ 2hAu7 }'- <\Yi>a݆ 2M]D.x-gU:Qg rHu/"og=nmL?qtAY:\۫d0U3^0o4VRί-v7O|$7?iamW[W.68 SN@QJGNE񠎣cs#~͂;1NA[r]=c>/m E}HXMZ ـm&$*5o“"БZeB`8v"U7S6"u p~i#@ܲJVzl)P\0I`"$0Po鸘W&56]e3>|Z+ߜ`; snaA?G4BzŽx?eg 6z_Y_붶QtDaBmhBbJ#MeNf\mȀ4 ,g¸sM-rE3ʊ&ޔ໛&– <]8=O$.)r8{6ph{6ܢ "BRJ|IثQ&}쌏 Ȓi[h,(8ͻUwp;h~̥ Imh{tWACp΅p6.P╦KS3y]LhqnBF612^<H-wҪUNAHoóeGct\6ݔM I70Q QPH71qF>S ]0O7.g a)TMP8] =P !3.)1q7V&H^f4hSdrfr>`$E'NyO_`ΝہszB RؒRc93pU/@.ׂo=F`5iڏ>;_t%LpXA,Tfl|颜_I,UwH-39pa,5e(O`\֨-׫p=kFɾ:QZtՌjU ̅ԻNqʩ帚k^) 8u&X`|A>m2Fg;R qh;Sya,;DAv q5晧R-nd xǫN͖āIP4oۈЬe8 Aϐc[٫K =Ȗ9gYV HDDO!MHH &"R)o)I&JfC[0 BP_`4ð{HHq}7zI}7xIÀ)QؘPJf]䲳2k=Zv"ev CO@6LKe{@;|*~,%|@jXx6X&Z1ۇ| ^HP{Zd3T+z3(ioAh "8B@7BHgdSwG8Dь|JzaC +rA ,DV'K|PZ-D9&J"piTnDj֯jn{5AZbM&z^ hG`:{x,E$A1\MƼ\=+;JTk7<{4{ 7;=Dj!^Z )@skt| t \䷖gGxWwWu3!u@Ͷ ;L֩w;+38]LX79U/v濁<+M%&a^,ǙZa. 
:¸zSN,5P' _*2Pl5^_9@RM p[G`dV $zOʂ_Pabm9VkHTy@tkk,D8k1waM|6E@MS?tě[ۑ-s?o.<[$€ې~7oIie%6, xxEi6 x9wɳhu7m9^mk=@hsLW;T]lg{YXo7[|cP J<8g-s|7)ͽy5MޅuN|/uB]oX¬Q mv$Kk 5s-o}Ot,vt^U8oԶն Jw 4!)l2J 0~rma6xlW2WI׉h+O0t{N`.r>N,BL7w"":GdeiLSNyX"fCx哈E1&Ƶ0CjtΦ<_,*l;'tO:3p"毿e}]N.axt7!w9ή:XB1g3AB* 8[0,0`࣪or@7 O^FMyCa?ga B'w 1-"3MuY=qM&4pFH8k3CN&?IEdMs p8 Pҥhޱǖֽ+:Bޝd '9pOG⯒m_L2q I N"I |nG>x 8 Zղ Mj\D:>|l@STU6e\,' tF:,ZL7Jڎ7 ;Uoo2 LWބ!j$9Occ)OtqVhh1't#e8DT5w6A5 2LxEr +g5b]mg dPND-I-CZ$8nɿCT7[>dnݝ3mB}R♪PeJ>رd ݣKcBWntmWH[:(\ϝZ=hFg>o__;d*>o^-[ßW*>IaǷH=AA<-~wШ}a+0,bJy20nɦR32ʦ}E@&s2a 2VHcBjɌ My:`n N2և:i #WQYWV=pWrEm]kf-gFZE,K]ёK\-d8$RtZPdZDݝU{b?N괮`(6kӹ\a"[sgF)h _#ڜ'y3cz_fAXnm3 (1Ps-Zs7u!x-(ƃ`qflsoѵRV \~XIl҃>paCc{*6;e%29aKwv?lv eyT U8h4OE vEZJq EMJ&{35_'gneM8˦,X <0pqjtCWq:YB U< ]-y\F%z2Q5{ͨc?8H"^N2nSS \NYXru۝(֪X\܊gǷ Q7KwAVE݁j7Kw&Z$ЩP6:x ֊{D;7E0df?5ȥ[NaR3"}ev,mwQ-j1i'(rVdKh N[c-ݥ9p¯9Mj;s:`xW(+mJwt\9{P?*(-0=X1uQ`,E'S}npNi+Dَ)-npl`>ݧр+-xlaD,ǒݤSHiUzO W6QׯnRr$)ry4.uҶU8`R^N"<HAc] aIN<9 nUv>Ӌs#[[Ipm`:: X(] M&w$,h8CS>٣$fEA=AIP^(3 kn[&9uv$0=O߈rЎq)<ɻ;Ii6fONpDŽUb,/;CxHy,Yjj@@`)3y}kk"-iRM[̼Oヺ=KR7AAso Ȅy?RɃ*;-F3-tOUhetn*Ocu 5,ODn 4}1.m 1f riKJmEsp-KU׶ZWΰrc,Z4ZEvGyjN$uS-)VLz[ UR&DЭ3"#1f ~7n3M1%C.'(\9@^.[1hq6M&:Ŧjבo nߘ$hHwtS#ݛ-+o1wSHo߽qEƢ%__hйm LU)+?E03tyB Y֚,0)'ѧMrD12tfگl,vO.7&nڮ,)O]dibX)EYCD~-1n`ʧj [=?K;EqR Wc35ÈvDٳ-+ඇ.y@mMm2E/HHĞh_û95YP&0w[Eٙm#cͱnGrz4npI96d*vFKU!EE!u?^dG=s;CK{L؎{s4@6=aYOD!~#Q>(Z4`OljWdO M gWnJBq%+,i"gx_3{p8?_}DC 0.nr~oǵ?bfs9Sqto;p~\rL[mCt9lGUV $Bús96q8c\-Dޕ?.8oϭ"ZA,rھq-N ΤշZZV2Kc,MMζȴjⵎ?rUG0M}dyNo[oxǒY4J~۵ΦƔPU&wٝfUDpy'-}vtV(wlVA)9m5?tbڨ/^Sእ-An$m#i&d-sNm;$FttJ B+"!Kgzy ;CfpOQ v11Dc$7l?9} SOU'䃡?Ѻ5' ֜b(c5'&6d@dl=Ӭ_ݏhqyRd PnߏkqL:.0j9\siz[r 0?D_u,5d$HrCV@@.d2׊&rPLg'K̩EXZ$QfZ R2ӶY1oFn4,!1&M%o!G!_{n^Ũ VUx$^:\SR@+beRV>@^b3iX3&$ MI e>x.fQ[s5}йssE+ӷjOOWCw^O%aN~L}.4tv" ج$<@ϛg.ѕ%y/p~Sx w>˞ LsORIy>~+n* )D~<*)Xx2#ZOT'ZVtv:bН"ZAr,(jsOP8&7qȇ:{RբtτW*7&àB`꼔 rbh؇ 닑QZ]ԻaAO篶&e^G? 
#̢axi ,u Uɾ {Z4?zT DTy ->y H=,EE$jul%e& Jăryjʏ٤5ﷻ[T)\I($|ݾ%oE[ΗKV{>h˕7]_c>*^ݷ`S~akC_/ .K3vN2 \s_RNb1ᅘ*@RM쪖{&*|Jñ{Ü~N}mxZ(v}OΌ-_+I%CKKO:q-TC jO_%uۧBǁP6䗽kY.EJUeE嚋zhRgQa:Mx~FWE4  5B'VU{2㿺-y!~g93~Xl.g9gD{lȃwDgѷK̜˒Ns/YzS/%yJ A5|(r2G?H E;dž",w{"M5 A(iyfH&@CXPJ;w*.k |'mj8q74 wϟ8G(?@ QhL 0l 45țL%ݧY+Ļ kT0/&Qz#斶 F"(#䆶 !MwB;ڼ gԿzHCq"BozMMI7t0 O$e_e:w6|'~cE`Y=!kS6viܨCaɒ4zi>;ٽF}/ǟL?ŝ+f]\E}E+c%cC6_}SguCIv$0-+oMMM =D 6|Y ϲb& m`Sn24i`SSԺK%lj]vl颦u},Xٍ'rW^]oGAL/:1D~cD@G[p !BئJun7&^s24n_ؒ4ǿE\tHrn`čiL|K˖r +sRLc ,i˅#Nϖ1z%T&cCbzxHp;' Y-f٪W!ߤ9U.]BT>_X:E5lAYC3lRD[&rZ̵QIV_+DVy-{a}V?eө#¸.D@i3fiGLXgZF} ;zZ]gďwǞ43+G8sdu[MWka$х <ب5KPR]gb b"2B!brv qB⟈b23Ab7v1bB1) wiHyGA$փ!3w2Ah9]*mmvw|k\ڮ?i5?J7>B̯^D(3;mKS/-O~7GV*=ptUv7 X!hXxSw#|̔( #,(?c_k>9M7H*7>Rjz_,$Y\q E 4:5WxC\_ LD)5i=\! "d[ŸL >mD>dLX0M;!q_2rtdѦV\W46bТ#UD nnvdta1\q {;ю-gAѾ@'Zt'[Y.kq3\;2A8d`XmLl%<[I- 6v60տY/`e~; vj&F/հQ׊ʵEg0ۭFBhkc#H`;a66JYR7(Axg 8'?M]qv>41xYd'~>Pk!pAh0T[Np5{Vb*]J(^GXل`X_ΕYC({mM%I&.nR/ g#ǤZYm T؋-+-[۶.\ߴm27rWvTg _"nυȶHhgdpPL1!(✔ [dWH828_[M9g Hw4-jd6|pMW|2b%R-qg7? ?D; IKB%-2l<"?MspP ?x;gyz!mLSMͶDՊ\vl*e/?܋$Y䨮^-`uZ !G76jr1btk6& &XT,\r {5 L wv]tWU*KU#WA `Л)T\'Κ\yY>T%zgW~$  +{cqw\ND˷OtGpK`' ƺ$p2G{geC|Lkhv,eݏZ1; 1eB&@2ENF _N9JjQvtNhPe͆ݼr-vbD(mиsx5uxwx:;mGEbO5\ަZަmn=aD/uɸHKIxMZav|@ɴĪxTCʵI~9OUPnV\r$b ޫ ,.,\Btݡ ͩ5wo﫶G8خ֕wa|Tib31Itu~wύSr`6? 
58i@d;Ɂ~LSIFL+P[DfР˫0˧\M@[JCÌo.cjP%@߀C%BYBɸyCa[9Fفh<.Lm{|ԟBDڿ&$2\JQBab\\L eDUateuSz:ي1c!L[9̑yVn9U> .ie,S"3oNj)|tzRY'S&Y?nBƇTtkL'T۴TkZf$w10+U#kQh*Taəۊ q'C3GmzCy$QN X}D.+R134"JUyMCP\z/5|Ή7&nDu)F9h3P\gGwfiED71^JG *аҌi񰿕s8HR3s^HWB@_nkIwmx^ B^QkmG;{՝F@D5w|);J;K!Yp jcsV-2Nst$_\Ќy|չw6]q3u$lghcuɲ>opFX㆑Ai7W%,-̳}q J#{~pY݂z/УFFQI?jWWlI3Q>DaFB "n,nF] o/t#bXF@dKL'N $'6یَ+2#3wٺ/1[{ :UkFJjK'己Gا ]W-k|*z0[Y 6Y s+cٟpJdɢ!Zg Zu'JcZfWnTw,) `7j{WN}u66M&W6^mD\:Κk/:gLln'<%"`1N\]*7RpjoR񰀖{j#;ꃵQݓuhewO3cJo0,c|_iϘ;ɦ׌;Wk}&5gQYL# s7΍ޠ\ bJo-N%bo\dpHc@F1A I1qKFsm0J +$#޺}HEwh=º2#^S]!IY~ ?M-'b;C ͕DI]@dv !tɻ*S m_(͡/dgٓp H&_`kʷh_dZ(ϏS:qW#o;|s=b&?#Bt*3"g?#||6>ZI'<"z$%w 刡uZHn8i> SBF3azxL|d I!Dܥ{w;oG`兓#pV|஼Sp 2b7C!&qv_㲤蓓!#CSd鏊觵@mGNj?O|VWYOo;/Ĕ{-ЩAw/ ilAo;N8b; W;1ҟwnN$q_Dz l}QZ_γRs'a^?ab=rs7>+: <6a},?ǑrEOZj+-n{}<O"Ro{> /?aq6m%:ݏ#brba3|;NL/YȉGOYHGUސ$ ԑ"uAeŧ1&GbXxU曎 d?~ C0)8d >_QJ.֞Q%z"NA_sp35M~'xJ/6dUpgȎ a#O4J,ܷڍM{֞- >ՠR`aL#T+2G*_ s!CA-wX,@<ְaTHW#덽]Mbb Y 5Q.C83% E+_s7EqONvT*CE:f6 hOȃUyº4B,]@RW/˯:0iʦeMzWX&%PnZ'zO2%Ӵ3E 7E_ϊ:󮉷3`N1g>mtD_*zеV&7a,@PI|-p8g qtsn>vZAA5DH[FrWySQ׃uyoÀA3W"K>cפ5Otb~:N,"8#40ugf5/ayC\paic_Z Bpޘpi -bX!廟 |wCkC'ySFK*ɲ˪s6lSf]0lN_ե}伥 ?0uACCpP'RsR|K; ZSq9b7u "ڑ5תSsl9"ϑ9+rY f>X:keg)me 0zѼdI@j6sDu1 a jdɊ> ⯋&[ [S%mPLoŚ5]iUPG-pf+&m@E(p]EIGfpU2Cm`PY<5YEbDl7 m|]%-~PWdƸ7x7z@KCV#jUUI+4ȋu,G)&7*|IjH6Li+i)&iY1N ZE;S8?>a>+CaP3ol'|f |-mzh2A͗֡}&sqr &0QU\Нe홃hzt "AaRFwCۋ+]@cb:".@L+ d3OB8^Jos=H7>Zݦ1Z-wXyFU)*[z"^hQ6=`Bd7'krjԝt7]F"A7&yK8zi`͋*x.^}#ǗFܝk3vya-?Uڠ~BhPm ioV!8op'.\CiO M ɜYtִ7qu޴NrJ{Gĺ*@]) n4ٞ39Qc;f * @ j>LgDQĆ6ZU(7jgΣVTğRr3qX9nX=UԘ8%\js&bK}tɱٺ63itKOSp\3Ft\noI]:xHZ'n5s؞Ll𢲞N$#=cc皴;ݦۋ^j>s╚q1lm*A0E۠2ӥtt߫뭼"'j H]*60,۞)ߞfKl'#.Yb& _yԫ^x& zW3}"?CjYGYTmu%~tw\~Gxv n_o]Gц S|9$qE_O@> ;tXDQn|P+eINYagg+rBO;^BWP/&yj#'Qps;=.ޤa_cj\gmHAsP0^ϹMB .XVmp)]^ieRW'is݅d'ҙ¾ҝP7j\s^?aaߋ7)UN:m|3B#}){G-tuGRZ@L ԙTk CdYK,fV)}WԷTm4ɔ}[_(%umwv\^= |7>i::ua^`? 
npq\4ʼ{Y9bof=æwkmPlwaq?l%Sx~!+.n[q/]kc׷.&b-kg~`dXmz$ʮv}a?`>ܰ&uy~(5)jM춚٠<1ZX,%ŒY_!Vݑ0ћ|g0ٚν[ď EG 7?כwy')+Yqdt8M/^u8x )OJ {:Q}*4l8@2^2LGRV^GGEE)-2tؗ(%CVޥʐ;e/;sN2xg4H|EpwZtU-y0e T%A բtwlq:vIҩJ[+b-ӨʶnQeT):SfrYUhR@%=\7^W+!&6N:M\㽢Lμ6~I{]caO|2^v h3V3JQO N97+DwDGֱʙJx.Rawm^?t\֑O{:VGA綠:b*@$wBV sOH/+ 3:Jfl'gp!l&(&0NwU!#Rk؃y-LFZIdsY?HuW_Q=_om $X0 7]xsEy:4ݍȎg$ce/܁8G÷AͦY؅n3|lc@;-d0x.YForNwR*9 LT+7 D]޽/-.\O=.Vu>'*p1 Ο4_)fR4ZE?c/Ps)f|8Jhvld \N/a 62Yoh/3]q@Oq.;=ϻ [ڇѶtj@pIp3Zk~'4ZՒDCvIj#XrA^4*!k-XPZ4~ lfrvlTF[v4>m>v﷋gxlhS6!O2]rCti?cw CI틹>œ2u"? +~ߛ'qS ;Z1ϻ=L3*.L3GB1L*SK O'1R˗v0,D,b .B(XާKj Y >G ^O嶛e+%#J#hIPullQ\`遜^CC3ZVD Dׯce{~g;TH3@>>䅺ِ)yP/&֒'=(6'ޠ-َ"V[A=?e!{Hז\/OƓy%c@Z;$@SpWB|o$oo@~G+n51 ob)okG#,6p"f _%&wO#slvkq'WQ]^OX:hi*Pru8/OV !ޞ{\\Mk684OT+X[\=^7]]6=TrGXZoĊ.0tKWCGwUtiFU}=df ?ںXU~[ΰ|`҆fJ8 ME,WQr嚛9=YRvji ;ɢY[6H㯤lV7k~@?#v@zjv6)F ܛ,)3 ς+2o3:5 vox`cB*R)Tc"uCVfWo#N'xX4B nxZ;MZJ Nѻ.^ѫ/tC@ /i{],GAU2FMdd"SȵOa`Xg*c+:_/'oXoyXX*čUO-H&DN܎@3%h)S_Pn8K@4pDUInx36p_Nhh'!]kdAd~8kc<$0:YeٻP:*~!#\AȀAKwKY:XT `M$\B&lYO}s ruQ*]ptZl5'#ÓgBqw;{)8p=b}npqhl/Oafച5m|uɊәЫ,˟Ů9]w]Ư/UTe+FTf?vw|rlw6(kk2 #sτޯwi Pu3"`Ix9|"&{lxIĢI #ޒdo>%m$Yn='[.@XUn)kOw݄LClp݆'`8~'* T׋e&2/2TqN{@5O51"h=,f$`3iLw ~7&:*ʀs9`~D-DͫW)-cb(ed~Ёv%=<X:dFz?4H%WH#pCC(ݥ\IXC@ Q-*\]+z@vFn{8 Eй.W|OsH6 =V:bzyKns|iYH |w#χ|\'! wV_ =U>B0DB'|?r^Y 7ݷzcYsp뼦Nί~ݥW u\_t/nm; pnΙ[[9Hw4VՠZSRgi0):Dy)-vzuHHa}%#$bSk~]@0./@nҁi+;{%ÑDǖĤD9΁ EEfV@pG<="%C 0z{(f ~a$3L*EQT}ݣ@.EW~[q? &cX f0Pb(^;J0Wd)Lve+ S~^(fWs4S9zLY2AQpNN2w.WQ WтMB䓨i'}飬j.n/]ߑLMD6!ûV5pn9ped)qCSQWDz7N`fllwA2 ]ÀG`eC>IR )e׉`rb.KyWʹ3h~DZT !ٖ 5FjGZ1$:YE7;\8@ 榍rǤ> B7fH ] EhfKCoaxDX1nl i]$oZ1"~4+xD. 
_jǯ*_C׿*ԡб AOȠ{v:2nۢ3AS_[roȱH17"IuC(pt7| qO.{t=!J(kߨC2 $ʁ#t;'e49Y=pJDV Tv|`O7Vs[^q-grCLGY^e3uYE~ofXz,s(eP%md_2Wc|J:I+(eodQVH$Cͱ.z/+>aG0 i^8O.2zRvN)dwgg7Uqjb0q)"p D+Ɇu+xĎơl{e&Nu5:"/qFIKgsT1QQkuvc{ˉSKvcH=q9lp.d;0T(N'Vc7s7m;,`) &db-D`3&O; 7~}DIC0mmO86uP;=,0 Bݝ7+l􀃞{g]0u}jm|-B1a%4k* Դ̩Hjq8W2'N OQe9 F=O $o$(PiV^FlK7ScO5 !;AB0-B>qoX#0-uv"DSCv+]dnhYHs/g!)9 *% `ghH@!# S{6L4ƮKj}&oILN}X_QtK9l쬭MgrB`Xn=@5-"D=#n5QRp*pDfS[ğw:n׫"Ϲ760ʻ)eʗuIO-\*KikՍ 6`=M-8G釋:rMDNIWOIEj/0uV]MDW֮*L"$~5QK׻ؗ^2"0wK͙kiZN6UX~\\N3X':RAx@V-iiX8s~(ttsX*1ӲhUzte2x5Ne~J{45b<%ˎ2CCxǤIfLڧNy#W8)hkA7>ÿ'm@ <(3@_ة6 vy7[-p.C0 bF#=RHGR'YƝZnBX[wvkG\B M#4KʆRQ@X+{xUR8O gyU@ Gt^oWY_m.okywbꀞg`Q֙d-XuBmmȖ? p8(ScBcS|֬E/ u&14OtHW?b!16A~" 5XTgc+|H]Cq .蘊|ס|ؐc+]3GΔSPE ¡tIUa`Lrm0b꾯=4UiMcVi C1s#rn5@ '*˞2} 킥yJ&S?IuZ۰YL(oߏX?1#'!/%({5*ttx)vǼ#"FؠM?!Zٳ2| qsRcUgc|qN?S0sxx RK`.hȃ=4,g 7.e~B4µ8bn8{D.D jSD}Ζ5"ߘJHŗخ+-VLBTRưӢ܃` .ߔOC;Qﶦg'pb>lq$2?O+(َE {)xB*VYRQ\'θZcrWgyiol/OVIMc؏C;<CbR9|X'|XsSv TR`9+.0$cqns+)Ǹ]!|yqP`T p Lbi@~1Av!D?R4XfV3]Q.7IrRHr.\Ii?$ T| "S; , ?/+ϭO7L(j50]߆o4.(GFI;67ٿȨR%|) N|7=G+R!r v/fPc X\g|20Y_f. Dଳu&ӝMNeuU l \EmYZh,Ϯ 񻨃\qb>PG*QyV7<!>j#2C'!d?%!J@gh ܨs+xtw`Ckƚ~FyF2~ܖ1y|E>A^`] Mm2iC)~^pd .^Qe))=bە/A"'wDr=MI*^\X;"L0 0DAJycpdP5 ^xW ,34 cО l-cKT$Rm:tl-tb@O1q6O>q6O9qW5y:?ӜNYSRBi| JB/Ye'^(5}N9Sӻ1ҒڰcﲙGSh)q۩%-i|Vϛ%4d^r_zMagnK+su~nH+]įOc`ћd!yO1wRڏO:_3pSLMb>s!G )pGV yVcE8Y.o6Ɏa%Y.{QZXn^WK'f&|BhUi[HG,tl(L[Y7-IXZzuQI$Ml"{ÚZBD$`T=m~Ш$=ݨ嗉R;LN;7| ™ G+܁ ?텪:#DŽS/`jgD u:bQ`tpwE7a|^@0Yu#SW@?{DT=PP Ma@/Csi? dP4Pz}j&POX?~JqE5QT%Gi:J } #ŐU6,VS9zlVF ᜵;x(vH:i*zk[9{N^P tr_C[,xܨJ'&GSg-S`ۋ<Ǣ> LG) DzTR63)()S [vwEw<5Xj"wڠ sWW+~>`ZC헀j>{NbIh3bM .9*GY{!?|`Qp7Ӻ⬠׸9mՎc.y4yTZvxV_%? 
{ܷi?t-1Boxh쨉:y>zg]()HU&:_>/!{E]-HF:FiWucuvn30/fo OYa|rV^aBEHZ=FIFz7ЈIv:c*r%pvC3!.ʼn%@O xgO<7xlWz &`vW<+ ÚVP֞<RjU!bX4|$=$Z Ed\8 `E ;珈VJOSӆ Gx Q#3_IƷ(wȬ4*P׫T:&ڝZbdWh0xD<&{7'&4 7i`4Yw~-Uη'9>)R䡷ඟ3E1Ғ@wv.r8'Q D\t%OUj`'ضfNKL0/f䅀:\t=w,?u*ɘPS+#ҿ>x+2Qf~ T0=/)B@rيљBB4a O':n׉IE8=Oq.c..Pq3t!%Oem,IYɋ ,^R9|n"Y'MԃxZ{V[}l͏9(O QCUzIgH- "n\[OJjKKfPэG?fG;j9k3-.NPa݉+Ӓ$Gz~f?f?(4X[$ZM>(:(ܮ_°u>9efl~KTA PxeoܛipZgoIbr^߲p-qv4%g+[[6k.,:(3OFZK,& w(k[r%ɇNTYd"A `NI3=|Ll~oyeYdu1$VcC9>0 p%%FQ X)WŧRvk !+k\˯ST'l#6vϡ7Ot/s&N[R%\3Ǚ R cs[xv{)MovB*ns0 mfLL^3D%C/1oLoȺ~`a ሤit<÷B {ظt1]*ɹF|'6Ş9~!gB.l^bݏI2HD*m1}YWv9?֞" e45X,-N]|9Tv-ïL~芼=ݡ)x4+5 = " ư=Brj,B%Ccxvݭ9BQ&e0d5Ƹ̎c4]e-ӟAjMc(3sј_]p+UhL<K d iI_#TTVX@2)D_꾂L,{EO([gEI铻E(;|o2]/TGTt7j*!0.>`_mY HNQy?| Dy">VV2kI_ |zc/Ӈ!.L6,Vߪ) g$ھOd.d#;EPi$3H\S7D'#Rbz\I3[Qnzr< a0h*ǵT-m*'1ӡS6|?wb&paK@ ێ*bvхx!U*H'E湞&y[zq0*}/  z Xd/BJQf ҟx0n0eBC[b\ "CKײ1Z~7ϫın"C[O@fumoわ`Aɕ>1yX VsA;)#~kqUP TwW;$< 1vކwQ[b.ri츞[{3׉?,HlN}8si5xh_)3ز~-z 959ηtېѷ( <(Kz/p ܦBn&}?v/+A7 P"V/P-p:$gyǢ%3H)ݺѰ}r18ai^!Aݫ6[1Һ$ qxWiNeGT*"Rтy<ؑMz,;)+J!X*K3v(Cc<g?Q_z2ŒK竦o60Q N2E UO<3G@2~-f\Is4^aqe԰F%.BszYS'IӇ(!w_#}_l}Eһٝlsl73|쩘bA-s&[6ƧK6+DrQ2H,1 4I{!5j؅ʨt;879Qn? 
n7dlNu.7}O`j Q i\+`pOq,Q; @oFL"~G&R:i.]1=XC"bσ=O4$<æ/u +Bv27[lOO1N78 ya\Qem nJR@&-c-`s>ܯ8R?ao>&5Nve7ӯˢ9+& I_Hn7R|Lac2(Z2 ~iQQYD?-ÈAfo+ \ݎffDS~|߃"kHGiA{.HciP8;f0^5Vv0 Blڧ&DEY!2QЎnV}c76Yb?V(dgL:?K&V~gMEF]W⵶uL7Z2إy)IE~#ce)Fә#ƔlPC.$rh&Z2679^Jv0%Or+OTg=0j eqXaط<=khKGqP !=x\+CcW/V9Zy@u~ш lBbK#N*24u¸KIJ+pHWl~N$PӦѽts݅ڗC{m(Ɖ:3挓 .A ^Tfbek(!B gH]] gg8"'E]$9& 8#HiIWEG&$Pבp=x6lrlX> xvd v8Jauu mbGsx& q5ƍ~0 eRqB)D<6by3J,h`ay4$m2 l+N2` Z<F("vR+st(Í6&2p};\=WQm8g#dBS%4Yp-/@.$uX.ID{3)9ܕ 1T$Or ibN8^n2}OJ3uoLREVdYz(A3'ɥA~к̽-QP~Iarޞ"z#K[t IK"%#)/vV,F QKH‚^Ü%/?."bPXcOG))JepAےCdV;8 S; 0P\Ɛg](W*_f\wAMZRW};e2*N|3O`j:Ee$V͕}LJt(G#`ʺ^p}{s<O0lLn_E_Dm(-a]:v]o$\+78eK?y>'wk^ Oix6aU-D/ʈdtK졀:^qrx5x>Blk~?[/;6ݮ=[] =Z1 RC=SH]8%D?$Fa,zHt P!#b9 .GEFir?>Ig& jZ{V9+OJB2Wu#kخԒF24xhs tgD ?JNFC$1v>?V]JWtD ,ĭr8N[05F]ҀY VB,@m2P8 H"SА;O-h;b%c9r>#/MRNHeIS/S]zhCbaseBΛm#3{;<!d7V~$x$e컕%_ Bc34g=vQ e7xXk\sE:5Q1KbrlY~y|l {Ӷ](*z=އҽ\߀2)?+56{?~ZBbiV}޶jʏA[s<sՙo+wR;b#A kxg J;r{Gx=ر7x| ث:5άwy*3F9AP\.2Qc 1.25 MB 0.6 MB CedarBackup2-2.22.0/testcase/data/tree20.tar.gz0000664000175000017500000000172011412761532022523 0ustar pronovicpronovic00000000000000GEn0Fy&gUu|!Wu]9"nXY?nW<>qBZNW)z`ʪƿ[1- L_uƿeW`ː }hgzYpv؅!7?w+ _wq^*Kvfxkfx]3<.5XZ$=ۿg0m_AU_H/aM#GבG ?_ØM0+9K{_yK{_yK'?g_ W`wY% $/7n c,??_oܦ{`w?_A5k\Ϙ)"/7k[ab|װ+0W0|_J_9KvO0G!S%X#ۿg0mO }K ^꿂$ C+9Ksۤ?_˜˿{O'O)?`wo&}&Yߘ^P $g'K_I_o&qs`w$z6=lۤۂO9%濄[w _JIs !zCedarBackup2-2.22.0/testcase/data/tree3.tar.gz0000664000175000017500000000113411412761532022443 0ustar pronovicpronovic00000000000000;AA@}܀nqd1y<'OjM%*>nÇTJ)4:ZjkoׅZ~JjxÏrߞ>}n?_o>ͿujI~c#Qoa?i9-㏁3#ew =k߁cAoao&6Coa?h9={P!gAHYP!gA +?-Ll'? ?,GCǂ ?,GC_Ko&6/ς̿HYP!gA_HXP!߂O~RYP!C/ς? 
?,KC/ς ?/<:xCedarBackup2-2.22.0/testcase/data/capacity.conf.10000664000175000017500000000007311412761532023076 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/mysql.conf.40000664000175000017500000000053211412761532022451 0ustar pronovicpronovic00000000000000 user password bzip2 N database1 database2 CedarBackup2-2.22.0/testcase/data/tree4.tar.gz0000664000175000017500000002164011412761532022450 0ustar pronovicpronovic00000000000000<AɲV=)A=+] ߼J+]|RhKZ1OA _AEy qAa ~ܷ,?c~-^6Nai9~: (+Av^?«&rc}8C ?[r|YZC8 DYqKCxcwlR16pi5p=Ʒ?*e0@3}~-h>BZ"!J9d0^<AAq5ʁvu.5cfSHz<XĔt͊@=<,2}g"hH6@ ׋gh5s?_7@^#LBS-ɌU{8JEeD8L̛P<`"/6߿+7«-90x(V/eQ)d#М Ϥ/{2:y끈r``Y},⎈Y@WO832T>p`O9<' L]B.5JL(ք=|B$@qm} ,0Z{0v݁pvF+-M-JpDCrKLHJ XO^[X0d4@*,k&X"g\ڤx2*oK;tLofI&Ƴ(]_ƫcffK~TED\kE;-4˔rGN Ӈo/a9d-PQn K+AMo3])8/5qZ/@bG=S; 职۸(3=)4.Dx*@y\i\6e5>c݀ h{.,99Vt#! tl2|$ބ$o7;BW{'21t \Ev%_*}>! E)'JPF<  \)1 цEb]+7 h]{w85j}n%T"7d+^r6Z?%u缎qI N5MQR6M`ŗYvdtoT:p|LFE< TI.%_NhGR @P\8%}hOIHD<ɟO~1_xX)?9:w!"*=B%/b%@3G| ސ#wCwO:C7*4B]>hJ\.M A;Ϻ܇1j+HK >&n<Ԡ̅|fsr Frl k(|Ӆhե&j/97~ o{Jϔ;_oUϑ;^$?wO9CO|«z _?`> = =E \&m̭fj0*Pf`Ig_,sbnK!Uє2qҪfޑtqPN_+40"!D(̰[f9XaGwvRnk 0zI7(BXeG! +EGL]f<8u|$| @^2kEȺa7ln˜:&4w-ZjXbU+??_g-x~: ngzG8z7M9ԩF,@QV!Y%t9-x"{v3l+0$eE0/l>??j|Rn;0qv!Ojޣ {wĈȸ}\sv4G_H%~9،ؚ*D%{yfikyĐTP,6<5  7}5E '%aWX1]7B-5z7 U iQ`Ų\53LLۙ}#"_%dY.Ze-ؗ5@Z֩Ε gY`Y"SՌ)* 1rgfA/qih,+5sK:,k9s83e<'# yj. #Yw۲J2k#3;U}YSf rq<\1֋/4РhUeUm WB{ب. }h?=}=$z.fߴ&8W~˳"LFjLt:tõ= bEzL^=X} 'Y\L. i0]޽slZXrPY·ByxYr+@8C{u;_IA$gLbY_ۀ~6zmpPUuv2( C%!1- x*u,~A7"i/i&Ǖp,Sf1PDvB EYWɭ3X͂$61\Q̕w-EG'_O!g-|!=?d3~K~~8l$=zVC$("R AP8jm9ӆo۠Ep8:J6haZH$7b> }VGMbGajEbh<8H9u Sa{> 6X&w-^A<-Dٷ{n=Ϛ *0{j֤?'O;I? pGQc =oɉr$XKEgW'X8Py$@;,hR6IӒtt |}~6Wb}7;3 Nx25{T8i['/M=T?Ϥഫ:(4gJ*G*VsgYN-[Thi:#m-A\-:/F 4SWzbM¾ Sp *m?߃ ?f1Uc%Tɳ=TiK9lvs J|$ LtALtr\b㋢XHn\SjAE)B!Ψ)3qs3lnKGQv0anD(H䛄-UyqTYкFOPtϭOrGL*c0ۃ% y2pO39TʥTf3Ǩ)(MC6:hְV%**^/>Q42d! dVn &.FJh @Pʒ4;:T._?ASi\&9> քGdA%BO=ԡ RL[(79jUG+?1?_hTd]gL$ظQ/o! 
hŒ__RW y9GFu|7jO@7M\O.@ӗ=ﵜ~DIrmZrdJCuVШΆqr:8ڑ'0)4hXC;u1a|qvQi]srQBtd&jF)0KLb^N!-W ֕uY9iԝT]{`ɖʒYSq*LtM QyUf&TkY^Nn͗^D^8UQ ?-~GhMHE:cb<uUs5-2yL#{פD/N4ot-"YAѾe_?o@0Ke0`)hrnڕ#8}V"6lЪh@TӮ7ON֞tsHShSLLֆɗ/ؐY6)+[]Ў0[Yf 8] MP %?37jrߛRO! ~g9U1H7Ur q= 8:Nal)he}I]!lŧ?kȏFvDa;Aؠ`fA5T*nE(ϦpEeMH!-tE?)}rA/dS[xxW2 =:&Bç.C Emr-igMui~h*֡ Z™NPh0BCԇk.+dO̬ v%[@0>)2}Wo!NjLɹӹ %4 ExɽƓ.`rϵD 8ZݫK*+1p6Q:"ORp}ɗ $( t]mgѮޣq$9̧*]02ZeKGIG͙1Aⲱ%z\L^"NY3mQ 2BNwU"x54YKԳdryWwie2;3YuN*1`$N; ȲB_R%0 ͒R|ʢ1ߟ CgkKT~Ψaʕj_U`hQ^yYsT zfa^Ryt"ʎ,q_Do5mETzSF\y$l ~Re!L1/(x]z:xhCPXiZuΑ{Ӗ8aН. t ^ yn C/8ewS8t3C8`+XIL,$jCrꧡUs-Ѳx65 AFhT M^7r`[xe?Q؛ kr@2é*0L7#nO'g;I7Qܭď(Oy~ƿÿ-PZ}Ȗ٭v+ L JacFP554xkmд@Q%ޑSwaTԎpQʵ;ܸ)^$vXy8 ~xU q6}?ڻ%Y+ {W%{_;L c:BRH|+,d&hXl)T@2pXvٯqvqnr-YY``d 8̤zhw切QX񥭯7:fxҴ^mfS;d(\A[3s -Kcћl"i%u(Q";|2˖ol2XG} 5+hځaCMMgWF ͳ)%*&G(ڟ*9K<[w@k~[C;?B #'D-4I|ZJUޚ&jHx+X]+bFxin: g9s ݙD\,e&O[ 43iIxN7Fz`MGg<,;(h_}>=DX/.X8#1z)Q}GƿO~ĄkىR&Jf{|N 6KPhf 0p/_=vټPşӯ]L*Ksÿ0xGtJ z2lZrO2pd;[ `{1/Hۂބ1TnU0\<+ff3n_/5 Y*Ҳ́_˲L(g0l3'7{L )f7w+'}oo#,JݡNx5͵vN??z)%$^aKE1ӱ"Hܹ!mw_CNhwlsFWXUZ3]7ƧVB$W aعnC%xڇm=;, sMI\a5RUϿc\ ; AFPhB nE6w㛤f=!zYKk6fث[?mǚZΗջWkN'w> y5I:by^Yf$!qհ ێƱ Lde]MW}2lJ>aşuO =*'A$'WݤU&yV$&;Lc6iݑ \@ҧ@p:Uv4"h 7[9:g_ LT) N5 y%5=1$1<,5NҒ-,>/~d"&P1'dUJ#ύLJ13] p>wI«=e%s 75F~N1{SR۞B]'Far<.lI'' F(ks8Dӣ>]|'xq8M5HOID-lfDq6W}nhGFD{|]e"/Y|_4,u<ڑ;[N"vDg, -d%#c mS)Vi҇E\&NOد~myN-CC K"Z52`?.4&s$D)̷q.!U .e^/WzۛS"GC=+@lPe1x83Fj\6= ҇aX1 !yynZnw? 
U"p3p`p >t8>RKB~;fvj"A,L3va=u2Z\P"䞎H_[ O']&x,t:Qǻyv3Ɲ{`6[J h *lU^ fX㮖zWBvnvnvnv7 ˳ACedarBackup2-2.22.0/testcase/data/cback.conf.90000664000175000017500000000053611412761532022360 0ustar pronovicpronovic00000000000000 /opt/backup/staging machine2 remote /opt/backup/collect CedarBackup2-2.22.0/testcase/data/mbox.conf.10000664000175000017500000000007311412761532022246 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/mysql.conf.10000664000175000017500000000007311412761532022446 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/cback.conf.110000664000175000017500000000044311412761532022426 0ustar pronovicpronovic00000000000000 /opt/backup/staging cdrw-74 /dev/cdrw CedarBackup2-2.22.0/testcase/data/cback.conf.170000664000175000017500000000061411412761532022434 0ustar pronovicpronovic00000000000000 /opt/backup/collect daily tar .ignore /etc CedarBackup2-2.22.0/testcase/data/tree7.tar.gz0000664000175000017500000000026211412761532022450 0ustar pronovicpronovic00000000000000AK0.;>z6 _@8ЄDH79{ڗJD\JqLMQ aRN꼱qk=ݺX56>}^Ke!sD?PZ`QD[x뿾4m??cGU!d?s}(CedarBackup2-2.22.0/testcase/data/capacity.conf.40000664000175000017500000000025411412761532023102 0ustar pronovicpronovic00000000000000 1.25 KB CedarBackup2-2.22.0/testcase/data/cback.conf.200000664000175000017500000001222711412761532022431 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. 
dependency example something.whatever example bogus module something a, b,c one tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp /opt/backup/staging dvd+rw dvdwriter /dev/cdrw 1 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup2-2.22.0/testcase/data/postgresql.conf.10000664000175000017500000000007311412761532023504 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/encrypt.conf.10000664000175000017500000000007311412761532022765 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/tree22.tar.gz0000664000175000017500000001401611412761532022527 0ustar pronovicpronovic00000000000000sHǮȑ{ͧ Y{c{oO?ՒZ>-̜oQI GF$E 0A` /D _@"`α28MWWLkެMe(| +G˦/@1~??y?A!40* se8$v 馍 #DYR($`RM*NJPGĉ1a@u:fFMIE zCTa\8(Wf=b1>T=aKfJxpm\R9Pj('f?bSٚ9ےV"j=䏱?zb9!ese`z팔5֥4I٤[7s';_^f~u g.K,!tYL13Ï_!?{_Amʮg1J \ "ORvQAT!CsAؖÄ``:#p ZǔA#?9G? 
2.J9,D ɯ wq$V5-)lλUB\,NEqZJ1β뎊WǤ,7{|^@' \m*+^ή({7b ]t#R+l86##kӉ3#C'ZBk ;Ϸf{-$c ol7aKuH/uB./T͞Ti]ft?b{T!QDӦLȅw6Msn0 ]kaaȝxoIA>;hK8ʗku o`O ž<&_21I@]cw%.С*4jtuE(IvJ?Ɉع;J7@0yAo " Z1NJt %kj[Yp|O—;+jZ v毹4](pIdwTSO^T2z &DQv3Buzp$Y;v{n*N>ezIv|@`NW!rEc8#-{ 7&`Q~~3]V X?n}dV+x4' ܯS" R ʩ2T0~<#0y x5izq!~Kb!iJ:MM1W=Lhԭ8ivV78p~o+`~ki͖X5RX$-:lue: _NU:қG[y=Ri2c9ɥ `zXI_hh=zdI NG~w?A_|t_Ti~ȏ?%:O˹bThg 0L W 4բlQ+&,ʮ17X${2IπISY{k+n3X7l^b@Cz<Z.J=&\Oڗ;77mzD5 C6l,{n=4!˲H7Wڹ3ka^fED Xw]"wN"Q:r5jT6°V&kqҟ,3m NҒ;;c\eΈϱXBa3K(^8xWu˵/ĺ;O|άǎK`Y-ec $]`PAl{ gjy:;Dl&H~ʆ#&Czv,*FGDl^ٞ17bu\b,Y>d&ܷYՁg%gEF&"l/zQz+<n#ᨱgs3E{DD}ͨ1Dpt  %2)j @FꭚYK{ | >1҅^:m(T' +#,VH%>5Do0Zxe6rr/\OlOokDA2یّPau?P' pjԘ2}X(cRO&ӹ4>@r.jo!-'@Q# &1&esһZߵQ rWD͛2n.W"L[N- R괶4Q>S),\X;RINv<#W>‚ (M\^#sW?c^'1Y1ħD_3ǁ?m6gl SvCa5nݵn3j e+ġ7 Zqz+$7̴M[mN2#W@:7Xʹg`0%e#pdAFk:&AF$CeޘJiۘh f}CjɷD^xGo(] CJ9mh;Kzlj;Y>1#+yT}+j{S+yw@{ĕԵ _SfWtJ)X~z4>wIk. ˍ}_>W u7.&^@;)s*">e*jXi}ffbpD Qf^qcjNjV rS]}0 # M. bs z?/yY:pIs%SQ榬:01lM<.¶t蘆ÛϠ^Lr1r,itAtήkd(:3%c jħ2eW_ 3+RW65v:jlƄ&\^|z@2/!Q?c^ (Z# Yi؟×ݶ$,ZȮ:ie_e3tѥ 9;Ea0Zξ%sKx/VQDBZ6BvI1覄ת$}7߻.QnL@ie"w{g({=ĮKE c e>sC̪+/j&dHd 'dpw:Sd 2qSo)pd͔%YMJb$T4bAqR]𥳐X%r*8t7̮zml Gw l,!n\vE-01YUhvYga':-lOFLCP[e} qʢX?X\ĻBS gIcEHl3 Xo,L!+HB"@xis';c>]MgXI~YglZKfClEH\ w vm wc%Mx 唺ei]}ʥiR^ Y$cJY D9GL:sS&f,d] Eэ0&0cY[!Q!Wq IkVߘd4H2{5EV3 O%7,'-8V>]vq HEo^O :E!40B9Z!Ǐ?\V}i|+r"TCxоC"edU'V1[! D9z&8T34!;z$[uGݳtYMن_b3Q{^;~(?!tlڄ@"=z NQ!H N8 ^)Pژ!fϚPg3gs ]:5'WNx(RkQ[RM;_ 2Ύ"7̖3հ4fn&| f;X U_J5"⾣i&gM6U#Un饫Im${8f=*QG۔([\?:1VDQTeSRS!GvKI}???Y%Eel0`_BtsGS$D 6Aղno^%ɳ)FGzn^FJ3;{o}xƍ( P=of(:!jې2 O^,Ӕ]ɹb'Ge1ګEI뤾rϬ!L~qYӺC8K^WT•(!'-ǍE~ܧ)-R59՝SXBr7O04\ͦw^Z.tK \h pJ%*}5;ՔoE3OncFÐi[ĩI}ɖщcU 3C9+3ON/]_ylcSHT5ET2f̹H,H }pKI2CfD+5FG9]:]XZhTP zsR -ijH0ԶF$B?XD5,rh;{Ys\c ;爚|9!V?t1"#ZtdBpBpo0?ȩFwRt쉡3]Z)MWc}tVzTǺ{u6$b:y4R6m'\*;.lj56[?bƈ|E^Y_waԳ㪘,-CludvƤht~$")l+SJ۪H9vĴh6C<<Cx/PA?/?PPC?`P@ut'2&}7q_s/i^? ''^!O}?xv? 
>ċ$zo#y?@ByG`CO۬Zkk?f`Q?~_?|abCedarBackup2-2.22.0/testcase/data/cback.conf.80000664000175000017500000000372511645133546022370 0ustar pronovicpronovic00000000000000 /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root 1 /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 CedarBackup2-2.22.0/testcase/data/tree9.ini0000664000175000017500000000042011412761532022020 0ustar pronovicpronovic00000000000000; Huge directory containing many files, directories and links. [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 2 maxdirs = 2 minfiles = 2 maxfiles = 2 minlinks = 2 maxlinks = 4 minsize = 0 maxsize = 300 CedarBackup2-2.22.0/testcase/data/cback.conf.50000664000175000017500000000060211412761532022346 0ustar pronovicpronovic00000000000000 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B CedarBackup2-2.22.0/testcase/data/tree17.tar.gz0000664000175000017500000000174511412761532022540 0ustar pronovicpronovic00000000000000GEjPE)E?' n n!_IGJTEebP\:8yS))8<f~^oөqy~rEAXnp"Cx߇c^?Ax l~]k+;]c<7eE Xz;;ց-؍ Eikliv6O?pmc?mL_0Ypa?d55`'?w#KL'k]`g>OOS+OZ_B?|?m!_6_?*GGpϹ¶_BXgs+C!Y??H?~  \!Xy9Zi@%p'Dֿ;۸iW;6Ook?9{BXU) 9?WZ uZ{rW`g-4`'ǿ?V0 a=uϚkT -?Gbs??oZi?@g_ؖ?!(C(CXgk7|HZFb\!Xy@lWOM!8ߒy#CedarBackup2-2.22.0/testcase/data/split.conf.20000664000175000017500000000025211412761532022434 0ustar pronovicpronovic00000000000000 12345 67890.0 CedarBackup2-2.22.0/testcase/data/lotsoflines.py0000664000175000017500000000112011412761532023200 0ustar pronovicpronovic00000000000000# Generates 100,000 lines of output (about 4 MB of data). # The first argument says where to put the lines. 
# "stdout" goes to stdout # "stderr" goes to stdrer # "both" duplicates the line to both stdout and stderr import sys where = "both" if len(sys.argv) > 1: where = sys.argv[1] for i in xrange(1, 100000+1): if where == "both": sys.stdout.write("This is line %d.\n" % i) sys.stderr.write("This is line %d.\n" % i) elif where == "stdout": sys.stdout.write("This is line %d.\n" % i) elif where == "stderr": sys.stderr.write("This is line %d.\n" % i) CedarBackup2-2.22.0/testcase/data/cback.conf.20000664000175000017500000000007311412761532022345 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/cback.conf.100000664000175000017500000000162311412761532022426 0ustar pronovicpronovic00000000000000 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp CedarBackup2-2.22.0/testcase/data/tree5.ini0000664000175000017500000000043211412761532022017 0ustar pronovicpronovic00000000000000; Higher-depth directory containing small files, directories and links [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 1 maxdirs = 10 minfiles = 1 maxfiles = 10 minlinks = 1 maxlinks = 2 minsize = 0 maxsize = 500 CedarBackup2-2.22.0/testcase/data/subversion.conf.20000664000175000017500000000046711412761532023510 0ustar pronovicpronovic00000000000000 daily gzip /opt/public/svn/software CedarBackup2-2.22.0/testcase/data/cback.conf.60000664000175000017500000000176211412761532022357 0ustar pronovicpronovic00000000000000 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l stage df -k CedarBackup2-2.22.0/testcase/data/mysql.conf.20000664000175000017500000000040711412761532022450 0ustar pronovicpronovic00000000000000 user password none Y CedarBackup2-2.22.0/testcase/data/tree6.ini0000664000175000017500000000042211412761532022017 
0ustar pronovicpronovic00000000000000; Huge directory containing many files, directories and links. [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 3 mindirs = 2 maxdirs = 3 minfiles = 1 maxfiles = 10 minlinks = 1 maxlinks = 5 minsize = 0 maxsize = 1000 CedarBackup2-2.22.0/testcase/data/cback.conf.120000664000175000017500000000136112143053141022417 0ustar pronovicpronovic00000000000000 /opt/backup/staging cdrw-74 cdwriter /dev/cdrw 0,0,0 4 Y Y Y Y 12 13 weekly 1.3 CedarBackup2-2.22.0/testcase/data/subversion.conf.10000664000175000017500000000007311412761532023500 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/mysql.conf.50000664000175000017500000000046311412761532022455 0ustar pronovicpronovic00000000000000 bzip2 N database1 database2 CedarBackup2-2.22.0/testcase/data/tree2.tar.gz0000664000175000017500000000034011412761532022440 0ustar pronovicpronovic00000000000000;AM081ʂ_QDM'c=yK2+W<9''qZ3&}8mKӸcù߽+,/?1rMRk|?2e W_Lg GS=k55TD-k7ٟR(CedarBackup2-2.22.0/testcase/data/cback.conf.160000664000175000017500000000052411412761532022433 0ustar pronovicpronovic00000000000000 example something.whatever example 1 CedarBackup2-2.22.0/testcase/data/tree3.ini0000664000175000017500000000041611412761532022017 0ustar pronovicpronovic00000000000000; Higher-depth directory containing only other directories. 
[names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 1 maxdirs = 10 minfiles = 0 maxfiles = 0 minlinks = 0 maxlinks = 0 minsize = 0 maxsize = 500 CedarBackup2-2.22.0/testcase/data/tree1.tar.gz0000664000175000017500000000177511412761532022454 0ustar pronovicpronovic00000000000000;AɒHkSt&$RdVɌ̈>}Watm{cUW\pN|ٷq xy* iWQ(@8$^$ /s?/?/u[1\v?~O"~ɓ2AQ'hG&^^?0in/PS u|g]얼0va|0l戧8WӤ[AVxl 5x` y!o՞f}V%"t%;e4 վ|,2.85TI̛kװKɫgpSM@sEurSݴ&Ɔ HgM<8AWw2S_rQ)aHjkr=r067 #z?2[Gyr-0 6me阤8\2Ej"y't#VئlH2Klj)^I9F ^%n*ZkTX[{m&--8ig"Kǃǁ1I>JO<)8 ?g?Q5`\BPx'k0ɶwY0cN˅\-͡boSj:8ږa]!St3ޖs6|5K5cZ2DcSfh>ZFIN{+.B5+GOx?ĉ`oN<]eit#G%zÆxz8m4S$ Tv]@'ROs6;q[?9(MG2hlF @ڈK?T[=:)8,Teŕ-q[uC4fi|v)scxWkso9ʃD}i2U# }mك~5LrX_ 1 㳹zR1 Bi*[tL.(غеnI2$u Z>&jh1Z,xs?G<@A -Q+(CedarBackup2-2.22.0/testcase/data/mysql.conf.30000664000175000017500000000045711412761532022456 0ustar pronovicpronovic00000000000000 user password gzip N database CedarBackup2-2.22.0/testcase/data/tree6.tar.gz0000664000175000017500000007050611412761532022457 0ustar pronovicpronovic00000000000000AǖHsS |pZZk#LfN<'*;m k\:g__P  ڂ@ ]C@e~2C?Uy~_D䏚GǾ^ao3G{{q?~#?B??Uqaۇ 8t?pރX0G#UܧmSYfBdE6Ss=:"a؜)7nkQ?M\O+s}3YxhmK/*uN팣ȳYCVtPͅ_^$˼&0S$1ō`{cx ({-rѵyD=?{cbз>q)yzű\sv Z#~< aH$1Ba4f%7A^(8iІE&@jI3˷. 
D&>-"ܫ7mCWm"mOHRp802+6,3"K xt@1~1쁅F6(3!Ӣd%eGj_,#b'DjK1GޭG3rWwiYk2ù@inKJ3ۭֆ g &=87u,X (5 mH^e<ZCw'[< tn3=Hf;i[$LfcaCn$ v\èbjj!G,Q|3))hJ@FJR| eHB졪i$H_F B/Ւ ǽ&fRMy{.Xt, ]@er¦ yUS n^8,M=T—&U {A)`%[c`GU֮42.yPEZX/[+a&$[K]ķ)zLQ CSYK^KW݉2Ng٤ d 6؟ VQ(>D՘o +xIjv@[qA+J饺#c*1@s*Cf4[siFW_B ^x)r@\KPp鬉h-ާ12?0dHٳ^Lz qCSnwE% ҉z|V05Iь1TOh>1w }XePv(n6_}]qG߭&a3eS$'wC56<&: N|<8||hU`RcHr]IIMOfInҼ(&caoJښa2LX؝) 5M?ꋃp -@|,rJ,"N:9~Ln~:7; E/P9+qE5lK7pq; [GS*`M.̴[,9mzv^]9$yוb} QhFJHQY;;y%?(pG~Ԩ/Ǝ> gkFLC:5gsm9Lӳ~$݊"M6D͕vG|[ Y%Vu~X4[G< e9w@v,NY)T]M$_q-NCoň(C>!*Aw8?$98D1ܔX-+nR2|u9?[??n}0HAK0ڄFb@cFOUql3c{ff#[`eD qᆓ`1Y_H  YHP'Z4EBG@{*iŹ/3|~YGȷIy /C{M#PiZ&\j,Ҝ@xT4bv:)k64*6Yl.L!.MF]_-^WF[vɑBV"9d)ز&,l"z UESlTs0 uAuM쯇J/R::b&GXjp%y;BSOU/?L@lR(OwB<2Ϳ?D_=|~s~Y9 +eNI >'Ji\DP*Vؓuq>ө@tFȇhYaʴ`K.GZbX'[SȧvQެkSK m޹;} >urTxI'@zj'Z[4;hoBx(m{O>oɳ;!9MYǙNU>܅sշMd1]".c#q}p][<P*.!8IW΃"5 P Xuh4}M6-5*dg.ݠX#G7";O+VGjk \!BP.OV|ce5(E:+GMhǥ74>;Zuݒ8# asnlhMk2< ۷])U+RYUIlx-e $Raw&~̓L#S졣1ױ;w0Y -! ϐ%if|rK(IŠΫDC"m'#1_Ȃ-s-^v`WXMėL W^@s֩JIrp݇J#^@ e@jIsC{Gwëy['9YKKMk2P~ܠ /+l-$ͳ~ne#=Y9NkM xES*h'|DH?bj_oQSDW(éy)Y!{FbUyTq0ý;Zi'=NxqC+_̡8oP{"rUڥ43{ؽՈy{DJհ;8RBJZ";.Y>QarOlܓS tc=Ϗۈ~OnZ~ʷNʟ_^"+V6<ʉUEؙlyB wuxwd& 2 y'@@Zu& c?^𡕑Ex3Ǒ_Ddp ]V,SyqDL]Glɿvmm6Ue!r̈WҮ 첡Y>j m8]Wv9#x`˖3  Rf6:k )yZupiB:牗@ wKHْǢ0@| ]ا!J9Mܒ=|:2@W' Ru4yR)vnCN(;V[i:xk@TY[9m7K_iڊpV>KZ849_ KCk}DG3&h-3㨘q?o'w; B y6zATw YZK <#ۖe;zLԥe|4.@zjz rI!9%;+wc9yk9RB=&ڌzb?hpۈPN)hj*}Dwḡf!!`zyN5htP!;7 ;ϝ^ -9Tt@c<w2n[N[n 'KVS 0мC+AU.WJS(7ѱ)%o2V>Eh呭\EQDt˰)0yP(}ze =UQ\\LzV"|vom- ya)6V Gy[G)z 45<彍HՈ4`T&q\Hk9n4տ<~s7`ګ?ɭ0Ue MyWϱ\mPOop}F>?g??Ϸ)tu vƚ!E1uHKt@oοNʿáe/S/7~ ?=K||-麟K߮|/3 '>ytߑFi:>&Kn<>ZH=f<1Xhsa0&) ϳI'Б nţVFȋE5"3ю h̵_ )Ar # y1: 0o|ҢoŇNIBUoG r)}2bBB9viǖxȏ^ٽ(e&uFd߾ .F,ܯͲc!i#|t"VNR\rx9LbtSy;l&i:+wNqD0!rT`Q>?Ģnl"., ^d)'kQMjQR)8TBXjvR0G;'t< T{YŹ唓m|[[ֵqF`J/G2B%^w֔vI둮ͽvG侮kDO֖qO$wCT9O؍NثM;@W!Uy$*aK-t T0]?7R@DSYㇼRWQUcPn`O8gB4@5~lɓnEB#êmsf3wyf}>Wvq,IL}'Rp!o %p(i|m$|*Fjds-&psz ?"9 @| !_ ]E+&D;^G3ӰuDwj%A]Wt}Saglij0nH͸̕\17ZyDK$ҧnK\.)@fe@As|A3;)@ӡm÷C^"^ }3^2[c4 wp)ř 
ׄR(dxp?_w̏%̭j+zwbᚢa}SLO2/6u4N(>!5{a\"ؔ^'I6ĉsm)l{`L8%z'#_E_TDvK?2Şsjiֶ#;|ҭ.>}'YaGfgM2RL3%,Ё5 & C]@{*Zj تzKmܯpD~\Pbȧ $AʕdN<ѧW]:@',:+k? ߏ8E3o(g/WNjP_/UVV0tl|nsv{"0])J[CܰBñ hyn~]T 0 8%L|Yf#T]V+aR3I )G 1+}lݞsML; Ě=SV-NQPe3\8v4A1l%EN DV/P[;# 0Zv`?U,&{7xɩ Gxk.,C>1\n>s~dYڶ&YYfʧ͖L!Tolh(=\ !p[{NI2G/?q[>?i`*YqWP%{6@Bcs8^UŋRxuf wc1$FBW,<` Ŷ56d#GuXfM4hM6P k}oAȹ+HTV^7@5>aa\U&Cڀ4K"Ξ6eHC @T!8Mhqۓj{l(XMv!CȈhꍈ@p7]\yAlg' ]!m;w˜uz@Z[W75f3U ?;@ݶ`~}L$VȗI/aEX&<3/\qeۉd]/IvUs)ʬkX0F`)KfhXc\o]q.2" v2ޙÛfl$q$7{);+Y0'bbN~>A`/*n^jk uT`XE2p>֬<(CljvTι`~aIbV2+uM TF:ȏM\gq.86jbf 2wRf >$z$ 2rix~+?3?-*~?QdK}7Eg P Bh)b;+rkGބ-޼o^_o1?+??cv_k?.?+D ?{oF~|dOW?߂/[%v?wWs zbO[gc/ \-|^ɀQK%Y:WP"I|uZnHkdy7ƂOߊ.o2_B???=rTE`(c0 X %zX`{' 3+}oeEt<á+/*T"=1!a@(똜}dozyy_Cǐwp#%F.5QN8:ѸW;KTKNSK#"Q9镛X :J]VO0Ť5ky.IwNf$C:PK-q> у鋭.7Dp1" ,J7!GpNǞU:ŅBAE療?Su}Y݈L ʁiWrtQ ۛyطFtBO13b <ie@Mʙګvmavֹ9ބ{fL(S)AfSxA`{ ]gܫւ:[33N˞ y[kzC%Re3q改LP3cT!s ]pT{/S($"@̟OA_?h >i˨1X4(+";kqX- O`@/IkҸFCdo^=#Y݈Y_B  3P^!<-lBPF뚺o6u1OC?l8x^{:! 9QF5Ձ{ޚrM]xaThmOUmX_gFE{%+scnmyhmcJw=jJ'l% ʦB=*'a.6-BuEc&Qd>>{A$ock0 s'O7@^wӴftΦ |ZMQp]; TFWiVyVٍXuI ҵi}9;%.?5MrL rVT~P oU^9X)1]a+0Xt{44`vNDdL{p03Eb]@!e^;/3r$x>`bsXdI1YkG>HA%Fh9[_[ZH "tUNjxս8XrdFkFS'ִS={:"I_(5Fg<ląW \%<[7 WN%y? 
s}ňlT=>U: :,+]XRIrB,$c<3D294l&EzЏJAyz.?߰j(3>SG?w$`uR~EK} 1Līe͔2}Z6mV`&#EI'O]d l2y8GN+t&brD.2w~U<ډ5cS[g{ |<=WGG_GUߕeiYL,~}S=FȀHSYlPuX e8;nҞ746o=bccNUޯgV |k<>^(@-a~@~?=' I9ղurAύh}KReq9gɵ ÄqU[[L`Ol񹎋;Mn2=+MX%8Q1 3 h~tDeg݅ 5r" W@ցfOdŀwڔLl/ V0wdިc* hSA"uaǼu3@͊{xW~bTK;pHӊ zbץО,A;]dL1O fv(}L]-f]99TQϣ6td6ى Um"4.4/Bn"}9y ]rYأ ;Clm F_}Ĥ3bR^C_EW^~=QQmHw-)Wi>>>Emiax㗝+ eb.1'{xBP@+""%a׳:3#_uQ\U~!TI0ޡyӲ-+}LU_Q5 1wbuԏ'~2~5Ժnj_] [ᷩ RaFxfĉ.8!4Xjc 1[lZ|_k;j1AEH٤Y#[mF`g,W4q|:CVN#|gĢNa%ow$0"@%t5 Z@_hE]tWыUAʼ VOP.cf5w5X W+Z:ΝA gg&1c; ֝@Un0# hNM _|pex8.;_ةmpZgX8܁@WHhvCP EemCI '_m\a2Cuܹ'2z*g8lPk[o.73 zh#E"F\>?(_KNub-uJ }1>v46 W>$nџut-,p3@`wKۅxg }g\ jJ9ж^tT y-Vk;IVr7HRN3xf!ejUi x-H*hPPY@@8kj^J(0(~j4:w^?$Zq{ح[QIWFTG7͛6 l"ZҞD#_²V 3LtfJJI=psצ n4˶er!v"Àƨ=hsr|~ʰ:15}kvAjdMc!lV r/iTRd׉Ⳃ%Jsj͏eƘ1Tڕ|pʞp;yd2vF/FBtHo7/SwXvGb=YEAo8~]ߛ/9!'/[~]ߚy:&Yxϧ7Kz=w}L/+5Tެi[ #8 WV/wƯ?oGG?oO ?wuy3-"+Ci+" Cp& L[JBvk6:AQ-"hw =.> C1dbhpL z} ooFA@ub_|oȼ~#M)ʑ\fKkL3 +Y\M0.(zg%iNoCqiUԪ_4Qrs7dfQ2n t<ֶ;˺kԢzŮ>?h([@}u'Q{ !FZ1E&Vᵱ-G)&%>$xo9pĒ#DO%(8KmvlEt+;]/CpϬpI$VR 1#4`E$YDXo/Q,rcYq}Z-mm[s*A$ENؒe( c֋ܲˈb[\(-4Ж FD#C&bgj65U}u!l6M'ȯufhҳC%6WԖ24-2ca&e1۪]:[lؐBjU df5J$TD"v^N8Qzmڏa?Oh`x{*7\94q'ݸ6\kl)9r3=j{6:P\`-Ăo֢v4$rT*|㲼r2]![za 9|F zݨݓv0B1Zہ=7&'@d/u|CxJXGn }ֈON_Dwp +L^2ʭ)&!0O]0fnj%tЃBҊ{ELe!#ratbk%eB\ f4r8} vQjK[)=A%@&4đpT"u: =VD[y3-T 'z<9ѝǻgaLG)|g¡`uqO6;+CC >zhʩ>П~~ա2 DN[}}!LrU# 1Q:bma ?o}go"xmꌠSU_2~ O]/\ 2Wt)xN-w^6V0.Lj\(x1-E,~ s4K"1V^Wį;e$h/͈?Kםpኀp(ɱ׎3kߩPEn~F%RAkoDY$7t ƺ{F !'iMRZe8FV]RUX"qnuXN_'k{oEs3B:djt3r.`h| 5Xu` u0 RM(.wA `T~ j@Ys -@ WdLmveZiH@9;qG;Z1,)9n= בwRS~?~__ yÑ솤}ҡs wS(XdNB}PZަ_FÛԓs$KVKuwX;#vUZcYjYCNdsNAu?fX;Srm.^]{BX\AO;|ϬLf`&S8Q& [ו\Ŧr 9V%m |kf*䗌Q^{elZ[a2F'6p݅~ct:Q6^[*@RZ\S&8y}mepJp!ᷖҷRq}roQ允 CB],Kž3aZٵsB|#? gxF?{ kCAwʚa#szd.`7Xv'̌&kInk#փLXf<h"2s>EuSר6tj;^Ǒ, kE.=׏ ( z] O+A$(N8!}Go ;tE `5\ۖSJeu}3 ]D?3=?? 
׳WaPZ١iPg-_Y-_Þoy3[\Qr8ݴ^!* zmFAލ}ʨԺ%Ẃ%X3Ԝ k2\/}UF%~M͹TZĠwH^)fZ^^mnDMސhŏM%!g!Ź2R~tL]Ҁ-dwdVi)TWDb~( Q%F-wT gx!ElmNQDJ E~O:w!TvR4ƭs?L](l iLG׎d m]W yÇ sϏzL]Iv?$Sj]TN(dVF^Mg>颮 HMKEY{SSw8d=ԪcNz\$*H9z5Ȝ:& wKwe*O5%VA; Gk-:vAX4:JL9hจǪo V)(G$6 R#:XFIY3p$@5e{_F>nW4f"ޔGӴtvԱ|Ȣg1חE>OO E^Bټ14SV8ScD04@Jj$@O\*W| k҇`|/M M@^0ׄ_4#;,Y/g˥%}UVb 7zfN'%Kf$^tx>;=4`q2f۩OR{.DQ8 b k d鷾y3يکQ7k7JA]ٳ@H:G8-nܸ)- {C]f :%%uˆ(vZz=}zl/"?^dtV[ NlT&'d\ucJa Yw7iy,mrL|~ma7gn-З̕= ƅrE bUraȿ^*&4χ\=y*υQljw·swE!JӢ `i:D8L(%X'ݭؐ0^1:QY6D:nΧ$]&?-"Տ4Jx'ӁmQ$~)IN)B024^ oI")?N%+3-E|wa9Ū12OnMT6.#7^DBTݻ%,D[O/ '~ 7~`_rd}qp:)m\w:TZF,陽d=[%.+̻aҡݤ[8Vzav`gwQEſg#$9d>;6!l3$޼kQcˎUbrzEb7B%AM\E B8?q0RQj͡[ 74,vd N%EǞ n&pZ hvN8V ]~DyrWeDvd!E Ksl׸UhϮ#w9#IrP%xԚQkyfhL&i 0C<.eQE5>2jOVW˦ke8æyGŚcmGwjR6bʐg$CJzTnTnMf LK7gkh=a0R:*by#˫z?)6<(ՙƝ qTХWTd2+v@=UJOH_)Ejא%0qϡI }6#nwd8~=QW8xB:N?#>}u׮;3{MMTp;h78Hc)꫱'},)Q7Ő S7Y%ȞP1FN^eZJ\s]; 7ۑmbCG}r$OA#mcPi2G=z-]5&=ϸy!MR rRSUKi |2TK.Ot ܮU5mLԠezHigyzׯ =`)} [d ٷ]dRRtHadq&X e@ w6>pH~$s{Ia(]մخnİ;R&DϯgPΨJ@޳tNN'VTQ a=JH)JVcH<&zg-AwӖ13~/(OAzqnjŐKX)rx<i;nmX1 [%H˞#<߂.ePy*e-o,*7~SUm/,scS1!.H`ZtdTJYrpc6=
    䣛-@V稍!=$9kk>k"lh> tvpKU(&&3퟼7P.5ID(dtx}m_ΔedA{'YGx|0Ĕ=i)jc ߭rޜ _ZAx,7ݸBw|l$[sagoQGŏۙ}R ]W/}2"OWO?o俾ߝo/9N 6G; [L\Y JRrQ~p}#'#y>"u++?/~޽j HRSUc)(b+b{+!+F".s:K3L0Ѷ CX.1)~+; |xoAz,PKnEz#ՒCrcnѣJ}0T(D{ hN K:x X#v09V,:2cMV< E;|9u z׭~2㬲eMʘ^ [-g}x7eP&|;rc:gTŌ6Kmdjkrpvfӆu;67Wf"Hem9Q?8u!mDg M]ȷt&R$I=[ˆ_?8Yٵٖ!@Gk8c(hrPRH,!]E#תzju\א >\I 6z+>62C'&T90;I\{'J} Kx_r vO_܇y>VEz7QvOqʜsӬ}j#b.鄴Xz%8@gAAEUT%>"C*zk!6$u=:s3,y<{^a&]$l [e, v4l2tyF|YȨILWN#:l&5 ްRhh`92֨8d1~Fvpҿ3/-pG̍F?w\kQͽ)?4R5j K8v7_]Ǯȑk͒{u4鵴Qhn,pɈʬ1|' I~ЮaN/꿟߂xGyx'fpȢ>n%y&ПO3i{3vɿ˹#H=/p;e9H"J hg[gh3Mt"4&8qUφo)>JܞPM(!T76e(ngM1]eR b"^]Fdbcb+y肁h(PxJnL``J8XTip;@m)!G t`6/,o/l*`^uGji&P޶(&y 9m`a/%~}ClrzSH}+ oG߉|~?77m?I0 o<&_21I@]=gL1С*4jtuE(IvJ_dswn܁`x+ " Z1NJt %kj[Yp|O;+jZ v毹4](pIdwTSO^T2z &DQv3Buzp$Yfdެ}bOY^:+Pk4E|t\X7k *ȍ X?,afokڪp>+Kh%'_Iz@99%@ 7WNyw&/΀WӬQvϡSz AMSin6 բ'.Su<rlpؠ4qF_Q D>V;P󉊙T#=lzBE:-NI.eMROF3F nF#ԎOBgE?tq:km%߲_w-w@>1 3#u_Yg,ƧQq{h5do^Ô2u>)HE-آV6LX ӕ]=bnCH dOכFyh+n3o-#(Wxy +V\ zv1L7\oڗ;77mzD5 C6l,{nؽ4!˲H7Wڹ3ka^f<".CS=G\;չ0!`')F:p[BN&W0}3{,ցpPXL>` vK'oqZc#XP}R֙x ,Ely]cȶxsAf*wN)l*>h2[mɢbpDts)V'e+ƒC&i}]xXb|Vd`"fmr +s+  ,) 5x 1p!؁yx678^WDј]&nF!KE)8HQˬ5z`Tjd-tZ(E\w8j\*}kFC3ԦX *"WQ7z5=k.s`Fd8A.+?4o?.1u8DfO %sOǞR{I_q{z3ATnE0aV-W.0 E&u?G>1҅^9m(T' +#,VH%>5Do0Zxe6rr/\OlOokDA2یّPau?P' pjԘ2}~X(cR/&r-&)^m}\j7`In6[A5ݴ10a$BBt֘(K*ed A 0II 86~0Q,I4]0 ܪHdF1-B 1@d`ՒD^xGo(] CJ9mh;Gzlj;Y>1#+yT}+j{S+yS9:q.?4u(Hה٤R Mm7ٌ5z V?9%LH%Hj &X>@ѻA֔?/KCU'6 vp*9ܔZ>%_V5rxsԇI#2OD|vv]#KGAԙ),}X'P;Ю&2Svz:~xDʦƎ_U|`XӘ$AÖ44녯XV%GJſ Es-Ѐ5ѽ 2ޕI:|mJ2ˍ슩VU6>A*]Sf{XB9$rkaLDQ &S@T.:ݔ0Zd|{|C>ʍiH>l2TD2=Fl@Q{wNZ jV+`͟eg)إˊ>ڴ弐^EL8)o/lc䎝Lk_ s9ZkUnaG;)]C"Ku:-]=zukݳ1hp3Nq63pyHeax!xx!'K]RWwRfjJ2bpLITM?Hbo3VԻå$xhOeљTΦ^X[_ i0x$pUQl|kW۳‹&tz;cton\٦3I VM N ;]!p6s(Eʼn&1I뻦sYE.b5N؜#KRzQ MxeYqՂ2`LC0Dܱrҭ`۠tpT8S<{B҆K I&Z8H$O8<@v[I'&,E>{UV$**p 04Иhu=zKbikUx@\ИÏ$UA<\q!<އ#QMV#]JFdۚc3e;DƝ-BDi~I뙛=,]gmCn.'hH !uYGBRCzk/ ڸ3E&\ x~#/ 1 y%kyxigS**;"% U"ی® 1gք@`/~BSyzSag?:Țy(1 VM/@OP9&B+6K7wbrSDhFsHm)]M&oٻj*۩7nb$Jo'VKܢ6 TI&ڦD!|s~( 撥AW"Gx#?х?~Sީ_^}}>S?޽cwx UÔ}5ݏv߯/?;'}"Bj@S{S$lq 
~@8GA7U%(\+! a^Tiݪ\3M^}p/H-8 .\N{=?b%Y2ao ,OYDF'C,XC<ܨvĖ{pݐE2 jR|?*92Zb֕sRZljVktʭ4Rmc8y ]wV\nCKT +tjNAwD3I71 mo8fZ3O*k^42u d7l\("fv{Y;gnU\Ujc H uuy;'*R&UIsb#)SsMmܜhY1jXQ&gPֲo>|ȥ!5yLaAO!QJyCGjM^DplH ]yaWzѓ]6\Q bl;Lπe2S 2:=`~GoRZxb*N7V :L͡y֧t (ya瘎4Ø *ቋѭus-"Cy9bvHCXZǣduy,糏m+sJH9NČj7gߩ^/^xŋ?^CedarBackup2-2.22.0/testcase/data/mbox.conf.30000664000175000017500000000076011412761532022253 0ustar pronovicpronovic00000000000000 /home/joebob/mail/cedar-backup-users daily gzip /home/billiejoe/mail weekly bzip2 CedarBackup2-2.22.0/testcase/data/cback.conf.230000664000175000017500000000260011412761532022426 0ustar pronovicpronovic00000000000000 machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp machine4 remote someone scp -B ssh cback Y /aa machine5 remote N collect, purge /bb CedarBackup2-2.22.0/testcase/data/tree21.tar.gz0000664000175000017500000000674511412761532022540 0ustar pronovicpronovic00000000000000ENVa4s\O  w?;j[RhҦ<CN=nM6R^nJԕTEZUu˞8u8ϧi'?Gw"x.Oic_[?Wﱿ| omvy\j"z~77|ZrUw={Xx~8?n?n=о~a9 p߷|M4)ZϿ/:6p                                                                                                 OTa#!1LpnOwCD6..S{DۨRt})96W4U\=Mqq daily gzip /opt/public/svn/one BDB /opt/public/svn/two weekly /opt/public/svn/three bzip2 FSFS /opt/public/svn/four incr bzip2 CedarBackup2-2.22.0/testcase/data/tree9.tar.gz0000664000175000017500000000256211412761532022457 0ustar pronovicpronovic00000000000000^~ArJsS 64KP C "OdkL2{*85ݴVYWQpSP4M>MO#ķ;4I` .XCYy&U/2!IuQr \mq[>•8ɢs<]"~J=<rr1rn@ls.Li&ÍƗ,bkysRVr%zwxa335ALDlmLL5L8b]c.…I a#*IK2|! 
Xh;0M|Gcoh%)I*9Uפ&\d.)k*U&flgZe9;ZMMK~Ws*F??<ࢺNcFYز"!=&fWn WϒyrYEI^m6 }N]B6(zjM\ZQ`l "&Yh\٧~,.'xJtt6f*C{ax=?ߞw_=ǧ?o_?jP}5xZPI8&ӵ[1,#|Ds(X7H[Nqcb@ܑ 4nj[6Qd$x^cv"L .6(1UE XP F`zxp쬍(Yei34Ϗ @K>_Zؙu TϹ1MUKT{dq6b܏Fٛ(=lȶw—j{Qϕs9ҘC(õ=2MplcڵN)9יɮkmwٛ@O:ﲕyi4+!wu8]]+;?rXC @ @ |DQPCedarBackup2-2.22.0/testcase/data/tree15.tar.gz0000664000175000017500000000127411412761532022533 0ustar pronovicpronovic00000000000000ZEMo@q|~vf 妼~uv7ͦue I]:N0>!WhLGso459/cl/1y^(uZvϝvxoۙf;@\m|sL s`Dxs:XYq[퇀ҬRNkX8A6j;/b`7\mem(I.ĩɿ      D (CedarBackup2-2.22.0/testcase/data/split.conf.30000664000175000017500000000025211412761532022435 0ustar pronovicpronovic00000000000000 1.25 KB 0.6KB CedarBackup2-2.22.0/testcase/data/tree16.tar.gz0000664000175000017500000002141311412761532022531 0ustar pronovicpronovic00000000000000i2Eǖș{''#H;xAVwF:QS} 0F#r 0A`߮߮ ]PG@ ~+CK?vm?~_2N2\M~oC0o_c &:0@+s &x7j򼩪mZ~K(0E?ni5)>GZՙ#!@%SOǹ&ﷂS0fy^#gܳR;bL<<5{vf0755^\ o~sjʚv8o QuiT`$ಔVdx&*N(UBd_#ʕX ҅1+ -̤cl{a-ڱ\8g5RC Dݍŀ`IXp%4@x\m$nj}[ 7d[E^<@"[5C ˍŢ/|'G_iRFRtb4)"Lܡ; r^.E~2 ;)RMezpΔv#\m-QSkSL BhfTT#>G߳'3~_p'@ɞKժG/Ө)M=f6_c<,#y]lwgb XsQOgxI2/Xr|Ev;mHŚ9X…}n7@oʜȡi";񻰕*Xh7rhxIN%N:U?k{#%Db[!ߖ2"`65OvX&P 恣| rw-Qt|A,١MשrAج_}Μ0(Lܟ;=D_ 7 m1>qK:/*.f=fvDaz~hkl*4C)fd1Њ,+nptx\@1hhE1~ >k&*fm-s B`'ƔF.3+ M^XlOMGp!HB5f`YO(X}t;lȁrڹ*!xCIzFb#/wN?[9r.'_!ß܇O􇿮 Ƕ |5[r¬DY¬@r&PTN`O~[l4 ;{Oq/Q~PmR&%Y8TnyTW IE3*jadz+n!+#ul)4JK.mO'w"[c+%]tTc4窲 2 [w8ĻqpUӇprrs-_Y@O%и:Ӱ *m&BD,0H')q7g 5)ykعD''e11[Pv!{Ti*f2ɨrEq ,Xk>Za$B+ . 
,I&fW'^p$8n=j᭷?|PwKm'}/o#Da_fGe64$l[_܏m ƇO@FֲPSBF&[etXD ZNp¹gk(GB]l jO,樭f@X/4-~m.0J7c@|zu:llܚ~-P\?1X>5$ gZ}fK=4Ob)1Ԃp$eA=R,@Z/*饲">Yf`o|`{܏x6]7!:4WC#r<: ŒU*z)Xژs"i5/ç[//{-4ƵuN l~Ё&tjcqs&s*6w3*IlBq( B5c$=ZG( SxZNLx&R9{Iф!{ uFFYNx;^煼8B[ˬOs:l]8M؇j=0"}i@.뮥dsNT:ep^H@V5  umU4K -qk/=Ȩ^ CY~S<݂ʇ3.A\Y 7wFͿf^n_-P[*@#UqX$(Fim'Ud%+ =[% \1y-})1zw?6Z#Je  F~E]ݙ-iW$UdhPse$(n'6i*>z>FDs{5k ؛KHr4̬aWOߥ,# aSˮCo q.gsnZa A5|T ܟ̚/wsx?W.wEX?|\ U?|w7??꿐o;_w[4oO@T=NVpH+y/qcTnTѯT_lrxuaA{a0b^ǸB3 QKQ[`C9iu`d=rB4->iVGUau%ř7x`qFDR3KCypQS^}oNb: #uԀO峝V^=2t.;-#y pvW7܄RD/8~;#CǦO yH΀dq^u"6~=qd&GyУ;I_m!~o{ﶁB/`}LHߓLIw7t ŀVӹfD `-l7-ИpLK1["+ʭǣzr(fQiC4bGsfGsF'6`NIvkEZ 2`BajvxU|-ѽU,[MihRXn&v.c"dL:vk]i8֗3C|w$./z,x͕im-љкmR`RQx!kIhMUd"ڃ8;_;ho _oP'v΢qA B*x[|5!<בPS1ye"GW!WZkO ^]+|*JNr^(1b^, ɚu| ~߷`[4(Zw[$f&^4-ڀFږHEa 8Bj#>Ϣһuޥ R: D EimZiNB,ṅ&7bxuBqRͳތ諉N°e,*xΦ`sQN+.cy%YG< L)vMH(Wp w! ^BP(-{n%1GKG1Ё 1»$JhԏHZ1u+#r5wCm]E:A뿟/g[H ӻ$lcWC|Ո1VVW~zˁz%{(d7pWJTьi 諣$AQ >rؾ<ꀬRunӪmV[+Ү`x)B)8c,ad?3Q :w'J Ӱy;Pz5nӏmv_ ޓej֚Q^Ak&rgU<O̞e](F eZ~=,PiU@]`4k[OߵxWN0\xXqwG&/Fub5u9+Em__[C[ cJԟyP!g&|ȉ[3\Lx'Sb42˄} w' @~EJs$v'm;M- > Dž\* c4";j.nV:||Ԝt[g}z (5(ʘyŒ#NrK4[a6Nszy*yȼ|v.J105X/{Q;j[OloqaV[Zܶ{5cCh7r6]'TJO}!x(?/KSX(?/q;0rڱ$3|xˈމ37SQod/OqI9c`djBS;z.mOAguWrIf1ѡI!YpaA9Sז]ݟ~ـܒUiN!;=9JPW'mZanh{e`?/K<,U 窞ȍIׄ2)JGDVN}iq6Ҳ'~+1食N4jÕsH_/I"mK5naB`Y"Yv)3u~<OĿgdPݾCedarBackup2-2.22.0/testcase/data/subversion.conf.60000664000175000017500000000051311412761532023504 0ustar pronovicpronovic00000000000000 /opt/public/svn/software daily gzip CedarBackup2-2.22.0/testcase/data/tree18.tar.gz0000664000175000017500000000171011412761532022531 0ustar pronovicpronovic00000000000000GEn0Fy`_t(m6o-eAw#ZKz}`N>n$:Ʀ &u܄ 6jֿ_a<Oߏ;{\{8* -B=ۛ`Ttƿc-p >?n\,BJW|k'Ypv_D'7 }Ǘ'Y&l_ q_2k&]% /&]k+O `w,p+; C Xiwxy>Mr1/!_?WB2ۿoI_F$}{j C˿w$?˿w$꿄!n_\)~o'P/$a/?l*% wK?%fV;I$꿄!n W`d/fY?/!?[`T7?$߄} 9W߹/aȿ[/aȿ[[_=- l_;' /aȿ_$X?rs|ܘ??/R/$a/?l*% wI?%fV=o ƭ?ƭĿwS^>/!?ecLOxKGBl? 
} %dVnV-a/`[%9=J~O'CedarBackup2-2.22.0/testcase/data/cback.conf.150000664000175000017500000001206711412761532022437 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. index example something.whatever example 102 bogus module something 350 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp /opt/backup/staging cdrw-74 cdwriter /dev/cdrw 4 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup2-2.22.0/testcase/data/encrypt.conf.20000664000175000017500000000027411412761532022771 0ustar pronovicpronovic00000000000000 gpg Backup User CedarBackup2-2.22.0/testcase/data/cback.conf.210000664000175000017500000001333011412761532022426 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. 
dependency example something.whatever example bogus module something a, b,c one tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp machine4 remote someone scp -B ssh cback Y /aa machine5 remote N collect, purge /bb /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging /opt/backup/staging dvd+rw dvdwriter /dev/cdrw 1 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup2-2.22.0/testcase/data/postgresql.conf.20000664000175000017500000000036211412761532023506 0ustar pronovicpronovic00000000000000 user none Y CedarBackup2-2.22.0/testcase/data/postgresql.conf.50000664000175000017500000000046611412761532023516 0ustar pronovicpronovic00000000000000 bzip2 N database1 database2 CedarBackup2-2.22.0/testcase/data/split.conf.50000664000175000017500000000025311412761532022440 0ustar pronovicpronovic00000000000000 1.25 GB 0.6 GB CedarBackup2-2.22.0/testcase/data/cback.conf.10000664000175000017500000000400411412761532022342 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration tuesday /opt/backup/tmp backup backup /usr/bin/scp -1 -B /opt/backup/collect targz .cbignore /etc daily /var/log incr /opt weekly /opt/large /opt/backup /opt/tmp /opt/backup/staging machine1 local /opt/backup/collect machine2 remote backup /opt/backup/collect /opt/backup/staging /dev/cdrw 0,0,0 4 cdrw-74 Y /opt/backup/stage 5 /opt/backup/collect 0 
CedarBackup2-2.22.0/testcase/data/cback.conf.130000664000175000017500000000041111412761532022423 0ustar pronovicpronovic00000000000000 /opt/backup/stage 5 CedarBackup2-2.22.0/testcase/data/postgresql.conf.40000664000175000017500000000050511412761532023507 0ustar pronovicpronovic00000000000000 user bzip2 N database1 database2 CedarBackup2-2.22.0/testcase/data/capacity.conf.30000664000175000017500000000025211412761532023077 0ustar pronovicpronovic00000000000000 18 CedarBackup2-2.22.0/testcase/data/cback.conf.40000664000175000017500000000054211412761532022350 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. CedarBackup2-2.22.0/testcase/data/split.conf.10000664000175000017500000000007311412761532022434 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/tree2.ini0000664000175000017500000000041511412761532022015 0ustar pronovicpronovic00000000000000; Single-depth directory containing only other directories [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 1 mindirs = 1 maxdirs = 10 minfiles = 0 maxfiles = 0 minlinks = 0 maxlinks = 0 minsize = 0 maxsize = 500 CedarBackup2-2.22.0/testcase/data/tree5.tar.gz0000664000175000017500000003245311457442234022461 0ustar pronovicpronovic00000000000000AɖVE=+D; @@ߪ(+UJ%{ b6].t 0A`__ _ B A 㿀!/be}ï]M4FM}7%0+Q mvOtqAQyA5g} 󟟛qO#ק|?ZKfyzwP=ZFm 5ap"#˶$:DJ:xS,*aQyF\NoUjNKf!M>b NVꄊ/wKA g03ALC6w ?!3X|4Evrm#XN1 bU&{Vդ= nl^aR@8A804tusz['Põh聛ws;UZJšxG@f:Y# .b01R4f3vBڃ/a@ʓ(伂88.'B^fL`N74.Xm (tB6< ZoQlTWM|`2gwU?#G?56iyK1Fg"}M7q/] pTK\On+%1<2Ttl@P. 
PI ׼Ԥb.55ZFٰNIu,Kx@k0D9ՠxy/D+]I8HS1[G>{1g 74So1k1x|xW ϗ v?_{~m_?oCoW_OWE_{)<y_£E_)<y _?£OvO}|}3`j2GWCIuP%\54e^+@huTq_U۽"D9W|!Xg7}D3 +YIVi`x1P=ED6Wȍ աX]rA[77>_x2* /.Pʰ!S#\csU T9S!(!!G*O/ pχ(`K`y]NѵڂuT5$/%߹;3] \SnlYhiubPvbepޡYQdHYv-luT6xy9jŷbbO%d7 9'I[ 0'6/IQ_B3[*ڦ#Q/S gY }l`Ȝ$D$H!P!c4_c)PiqH{lҒb`> VW23V: `mLH#a̫$vgqO3_+3٣z&S!E:ѕS2KecdIddJt,Rqd52x'QsmθZ1sln D-4p0z8meL12\x:,W?>?ط>Aݢ矗.?AoUBoaɱ Q;i]&YIKpYRPїux~pap 4VQSPb a "x{" |ʄLYui%Er [no^Ylhg!Ngrŏ1It\iS'[-G:Ȉ}#XtE#S[# 8IKivDRU R7&-AS|":;JZ9??O7)p5Dd+G çww@[;w_}sx"O>w_ݽ΢C4qa6Z;ÊPRFN۝e=;*֮yH%Wxϧ7Kz=w%]ORnԷ%D"a\snT PYӘ20fB[ uã蹏vOfT7[ #}A F^-IrF v,{5)(hB˃lEWBUF"DЪ'i\(+ֲzIa)̭qpq`2gw؎FQx*~9F7pCn%{vraUoo(4f󑌞q*{pNkK=_G ?`{H3$k:OUN9ln橥#QMr BZm_?nޝvmQphfԿ2a~- hx]ά9n7~?YDtؑa/isZZ#EiRUNyog__oq?sS#KV`)E>P:Ӎy ,&13k±䘦PDy: D3i\㳈#+BUe̚m(kp eXwBHy7V Yq> L<8q#nήJͶ= ɾvW:]L:K-x@8۔$t'Bul$lT[i%!U=Mi`K~æ߲K@K `4ڄt HXѯ+-! ې ]h+6F#'@T'C'ޯ[yM6Zb0M z}qg/zpek$[VLD%z+Ty꒺t=5/S6Tt x8ol`'O9)M̂<޲z@"5#G$5yX=6geA!6N<H7v=+ O3NOڏ&np_ '}/}_>fGMϐmB1Ф&{˨$GWU23m_xa AnݯeXnb~C;hhZf#qQ<[C& |_}ݺk"jtDa9Nj4 Ϻdf 6pՙv aʕjܬ;d aRVt#@V"78n|">NI~ $*|/K3; 6Il!x^ NJ-9%/)Gs4a9ƤLJFEj\k)06#,bX8-^y %GH\&S⾤Pͭu=_Njr2ٲrqaGg?gÏ? yzgRv 9CaY7?]F!7~D让)8YѶu^f] t}|x"v'J(|׋ {a}?f 8kR!ɎŨFCBUR"4rKonVrqpkAo( 1:Bŗ`7s(h=U=ee}1(] qtR^sif]?ֿ?BocU*,wqhvdCoX ƍYs*CkQxE&.Y ]ax{UP7ɼŇ ̂3뉡͍yG"` D’ˁMq8"qd]25(kfv: LK,k?7 Y؛kɍj{2X u2]O8b8NN,kַC_CaC^R{7:|s+e 5J4I[$~=Ϲq9K n˃魳tmχ, 6iK$si_4%E|q;Z8! XйVsƽQWMoA뎕ulkդ.]Xwu$R6ӮdF:bųxB|]-oyJM&|aoUD&toG]B jo"Oʦ8].ON$FvgLm(Ay 0D gE|yR= -gҷ6tQmF5 \v:!ީ[[p%JUc„1cEY>hb3M}ڻ"R׷h-dlAt/<{9֚'o=pP5b q\ ⵅiDUܟ^QY PZ㪄ߺnR/H/ #R^N>yan|ՌĹUE sU⍎?qB?=";I"+ǩ=^;Dy\'q,SJs6q'Cc fQ.5:qqiqӉQ0IO\cʸqP6&GI >M/ v[H&~;i]pU 5R^FEl܏$s~V/0g~F]GlA8MlKlV6b)Њ jh?dz^]̍2t>~vjc{;Rk[XPvj$HZ6ӧ _qzUbp+pԙE[B rA*V}X_'6;Q>3p PtD̾!sٕs\,,EyWoyyJ#e]¤},7Hcnݱ' "Rp`CąllxgW,g}aBBjR!3Te mI] LNa n];&L~oyx4D`Sј$+v_cXKO[jU{Eh\~rҫHy<0I>?Gt)$E,$sNj4UlL殨w[.Xm5cH驠K~,̕g4H4"0FM]JoW8Dlb ~O*ܶ[M/4ػIL. 
s]\>2Vn ًQSbJqAlёIiWw"U '!L'`p$_lR XO+à^y+NOm*jM&Pb*3Q$Y&ݩI}z|օt@/?,^aQ|:)A- ȭќOض3Urbֿ~KdV16lBavJzu[ YrLOEV+xcP3yq+?Ƭ H8dl󡲛[A; iQo4+#Z .;S Ax>G/_?msG _s!!S#xC??»??N] CG?Cxן3}_>C_Eozذx?S#]Y"T *\ 1yZ.dVjP^MQ2|w.y;%Yv-CM'BlA4*@ˍ޳mddTg''Koc>X%voqQ^^8*3) gޕ[?2hX5BiNO"irnfu4㩡=R4Gr14=isįsï6jppV~O hoyOU2D1}L L*y?/|voV! ;h[{0ABKYffh=JAroOw3//äδ]!|h@`s ڠ;vI*9n_LIkE I.zW>ݢa݅*5?(^;o++He1F@H|Q {]V yL\;G02)~Y>ȌD;"8X{FN @C%ĝY eᕫ["oy> Ha ixjI\ CNҰAD(f\@#:)_EҾ MU+A%2RIx?kE  Xʵ$t4#ڦ&DnB4oZz_vvaZ#&!M2K}tβd(7ng[8Ȋh*8yOunvX~Uo߇P/G2c&n%8ZTll``#DF~)@mV#Rwf\gސ*0ْ2<`FsÜHK 'OYObFL'/G-_+|y}"92@DAumoW! w0k]ܦ'$NeD_mu,U3\o([շR#5ec R'uU \\n'ج-N$)npVgS)P*C3"@^UGQ 4 c("N vMzWf iJH8H}>?"B| c|=iPĒ#Y`.u:tdq _x<_hlrO֣n%pT-3M%DT,L"L RԮNО.͐g t4]۵RĶӣeuNKVbj*kgr$+WzI@Z R9Px*B ?G_8[GOr?;_N»;_!!3uH»;_!|W3񫐉B 55JY̥֮4\Lqcɒv;i:IuvlQhQTgxQH[]S/bo s?Ob!|gY=w?]CxCY=w?!CY=+iE8F LHUW)[眪[K^Mc ƌ/DK8faYKI5HG S[;e;d&+zKRP|X^A QeQG5_B+[sAR&w8cKW`/`]}R!#Sb$tҳZ dtna  S U ƲrEffiͪ2l]㙉Boa@jynӸ=mމr}gf~iMVn`S 5on,{U^8լmA`g YȷvXFaB.`7l5}\aV#m퍐W."]:1[38/q.00VUU8M r4T&{/צrHXZtK`c,_5GgC_Sg3Ktv翀0_s1m|@#1KjQy-:w)IҢDrVP&;P5r"`lmէښP 4s^˺R]KDՠXLlwb|+d $ԧ; Rx4^ |λf}_ wAf_ ˼3afS`d˚Ќ-ѮY>^,3)>&q@~\ EE)"Pfc$,: U`6q J|( Hg zXTa=+zYuQNʅ#)DkO-Z1cYۙmp[ky@-…ƪQN^2|,D`8&%Uk5^XAtF듯-sT";u> !rVGzl?ǒZ< A=unGVeљ:]|i6GtcdWULUwU 2yG+MuBmLD84P%FozI@>O 5^4"H,VB8g!IO+7vςsCl\pSbKQb1v |Rz?_?ϪTrIRa:\gL-q?o|kδqT{ѱ:-]=sk.QB̞ގc쇶,2g(ԍ1Ŧt,g.QICrZUXO1C4X :/#98<'SB AQZḱX%K.uC `-d?4F!һA.u7N^Rr;hQq,scxOn~2&>BP@{k$0;ag7H] f-~rᛓȀK<̓o%>2q}bΪmt_U=Cuh]sɂ l}~WG {?_mH??"Gc^O'9>JGSx;꿞[HQEVQw=cEV/wPTWno)NϘn'G$LWk%tb f2DFvpaACύVUfUNZFc*-#~@%w3KV偠[nK xx)k?x0A۰a/"TüQJ#xbf^ʾ\#1Yj(&gZ s K@]=zsbT-r0k~eݐ /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup2-2.22.0/testcase/data/cback.conf.30000664000175000017500000000024611412761532022350 0ustar pronovicpronovic00000000000000 CedarBackup2-2.22.0/testcase/data/tree10.tar.gz0000664000175000017500000000060711412761532022525 0ustar pronovicpronovic00000000000000An@`>oй2V4Dmw Ehb.Ʀ|aA`8JJn BDޭXpԭ3_{F aQʅ^ 
kU/i\jy>YmyCQ/*oSB3mZ0I mW=ȿ?QU[hMOu%Č  }Nzò 9Ø}1`^W? w7q#y[iRecC9J|.J %bj۔yr)Ѝ,T;W0uJaI-M~&RfX|IL sݱϥLA5bc7.`Ks8iSGWSP(CedarBackup2-2.22.0/testcase/subversiontests.py0000664000175000017500000031570011415165677023237 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: subversiontests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests Subversion extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/subversion.py. Code Coverage ============= This module contains individual tests for the many of the public functions and classes implemented in extend/subversion.py. There are also tests for several of the private methods. 
   Unfortunately, it's rather difficult to test this code in an automated
   fashion, even if you have access to Subversion, since the actual backup
   would need to have access to real Subversion repositories.  Because of
   this, there aren't any tests below that actually back up repositories.

   As a compromise, I test some of the private methods in the implementation.
   Normally, I don't like to test private methods, but in this case, testing
   the private methods will help give us some reasonable confidence in the
   code even if we can't talk to Subversion successfully.  This isn't
   perfect, but it's better than nothing.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".  Instead, what we do
   is extract a node, build some XML from it, and then feed that XML back
   into another object's constructor.  If that parse process succeeds and the
   old object is equal to the new object, we assume that the extraction was
   successful.

   It would arguably be better if we could do a completely independent check
   - but implementing that check would be equivalent to re-implementing all
   of the existing functionality that we're validating here!
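   The round-trip strategy described above can be sketched as follows.  This
   is an illustrative assumption, not CedarBackup2's actual code: it uses
   xml.etree.ElementTree rather than the project's own xmlutil helpers, and
   the C{Item} class and its fields are hypothetical stand-ins for the real
   configuration classes.

```python
# Hypothetical sketch of the round-trip strategy: extract an object to XML,
# parse that XML back into a new object, and require the two to be equal.
import xml.etree.ElementTree as ET

class Item(object):
   """Tiny stand-in for a config object with two fields (hypothetical)."""
   def __init__(self, name=None, mode=None):
      self.name = name
      self.mode = mode
   def __eq__(self, other):
      return (self.name, self.mode) == (other.name, other.mode)
   def toXml(self):
      # Build an XML document from the object's fields
      node = ET.Element("item")
      ET.SubElement(node, "name").text = self.name
      ET.SubElement(node, "mode").text = self.mode
      return ET.tostring(node)
   @staticmethod
   def fromXml(data):
      # Parse the XML back into a brand-new object
      node = ET.fromstring(data)
      return Item(node.findtext("name"), node.findtext("mode"))

original = Item("collect", "daily")
roundtrip = Item.fromXml(original.toXml())
assert roundtrip == original   # extraction succeeded if old == new
```

   If the parse fails, or the reconstituted object differs from the original,
   the extraction is considered broken.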
   After all, the most important thing is that data can move seamlessly from
   object to XML document and back to object.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need for a SUBVERSIONTESTS_FULL
   environment variable to provide a "reduced feature set" test suite, as
   there is for some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest

# Cedar Backup modules
from CedarBackup2.testutil import findResources, failUnlessAssignRaises
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.subversion import LocalConfig, SubversionConfig
from CedarBackup2.extend.subversion import Repository, RepositoryDir, BDBRepository, FSFSRepository


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]

RESOURCES = [ "subversion.conf.1", "subversion.conf.2", "subversion.conf.3",
              "subversion.conf.4", "subversion.conf.5", "subversion.conf.6",
              "subversion.conf.7", ]


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestBDBRepository class
##########################

class TestBDBRepository(unittest.TestCase):

   """
   Tests for the BDBRepository class.

   @note: This class is deprecated.  These tests are kept around to make
          sure that we don't accidentally break the interface.
""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = BDBRepository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repository = BDBRepository() self.failUnlessEqual("BDB", repository.repositoryType) self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessEqual(None, repository.collectMode) self.failUnlessEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = BDBRepository("/path/to/it", "daily", "gzip") self.failUnlessEqual("BDB", repository.repositoryType) self.failUnlessEqual("/path/to/it", repository.repositoryPath) self.failUnlessEqual("daily", repository.collectMode) self.failUnlessEqual("gzip", repository.compressMode) # Removed testConstructor_003 after BDBRepository was deprecated def testConstructor_004(self): """ Test assignment of repositoryPath attribute, None value. """ repository = BDBRepository(repositoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, valid value. 
""" repository = BDBRepository() self.failUnlessEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = BDBRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). """ repository = BDBRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ repository = BDBRepository(collectMode="daily") self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = None self.failUnlessEqual(None, repository.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ repository = BDBRepository() self.failUnlessEqual(None, repository.collectMode) repository.collectMode = "daily" self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.failUnlessEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.failUnlessEqual("incr", repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = BDBRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.failUnlessEqual(None, repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). 
""" repository = BDBRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.failUnlessEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ repository = BDBRepository(compressMode="gzip") self.failUnlessEqual("gzip", repository.compressMode) repository.compressMode = None self.failUnlessEqual(None, repository.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. """ repository = BDBRepository() self.failUnlessEqual(None, repository.compressMode) repository.compressMode = "none" self.failUnlessEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.failUnlessEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = BDBRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.failUnlessEqual(None, repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = BDBRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.failUnlessEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" repository1 = BDBRepository() repository2 = BDBRepository() self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/path", "daily", "gzip") self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryPath differs (one None). """ repository1 = BDBRepository() repository2 = BDBRepository(repositoryPath="/zippy") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryPath differs. 
""" repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/zippy", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repository1 = BDBRepository() repository2 = BDBRepository(collectMode="incr") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/path", "incr", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" repository1 = BDBRepository() repository2 = BDBRepository(compressMode="gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ repository1 = BDBRepository("/path", "daily", "bzip2") repository2 = BDBRepository("/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) ########################### # TestFSFSRepository class ########################### class TestFSFSRepository(unittest.TestCase): """ Tests for the FSFSRepository class. @note: This class is deprecated. These tests are kept around to make sure that we don't accidentally break the interface. """ ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = FSFSRepository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" repository = FSFSRepository() self.failUnlessEqual("FSFS", repository.repositoryType) self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessEqual(None, repository.collectMode) self.failUnlessEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = FSFSRepository("/path/to/it", "daily", "gzip") self.failUnlessEqual("FSFS", repository.repositoryType) self.failUnlessEqual("/path/to/it", repository.repositoryPath) self.failUnlessEqual("daily", repository.collectMode) self.failUnlessEqual("gzip", repository.compressMode) # Removed testConstructor_003 after FSFSRepository was deprecated def testConstructor_004(self): """ Test assignment of repositoryPath attribute, None value. """ repository = FSFSRepository(repositoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, valid value. """ repository = FSFSRepository() self.failUnlessEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). 
""" repository = FSFSRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ repository = FSFSRepository(collectMode="daily") self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = None self.failUnlessEqual(None, repository.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ repository = FSFSRepository() self.failUnlessEqual(None, repository.collectMode) repository.collectMode = "daily" self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.failUnlessEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.failUnlessEqual("incr", repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.failUnlessEqual(None, repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.failUnlessEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ repository = FSFSRepository(compressMode="gzip") self.failUnlessEqual("gzip", repository.compressMode) repository.compressMode = None self.failUnlessEqual(None, repository.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. 
""" repository = FSFSRepository() self.failUnlessEqual(None, repository.compressMode) repository.compressMode = "none" self.failUnlessEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.failUnlessEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.failUnlessEqual(None, repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.failUnlessEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repository1 = FSFSRepository() repository2 = FSFSRepository() self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/path", "daily", "gzip") self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryPath differs (one None). """ repository1 = FSFSRepository() repository2 = FSFSRepository(repositoryPath="/zippy") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryPath differs. """ repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/zippy", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" repository1 = FSFSRepository() repository2 = FSFSRepository(collectMode="incr") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/path", "incr", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ repository1 = FSFSRepository() repository2 = FSFSRepository(compressMode="gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" repository1 = FSFSRepository("/path", "daily", "bzip2") repository2 = FSFSRepository("/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) ####################### # TestRepository class ####################### class TestRepository(unittest.TestCase): """Tests for the Repository class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Repository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repository = Repository() self.failUnlessEqual(None, repository.repositoryType) self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessEqual(None, repository.collectMode) self.failUnlessEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = Repository("type", "/path/to/it", "daily", "gzip") self.failUnlessEqual("type", repository.repositoryType) self.failUnlessEqual("/path/to/it", repository.repositoryPath) self.failUnlessEqual("daily", repository.collectMode) self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_003(self): """ Test assignment of repositoryType attribute, None value. 
""" repository = Repository(repositoryType="type") self.failUnlessEqual("type", repository.repositoryType) repository.repositoryType = None self.failUnlessEqual(None, repository.repositoryType) def testConstructor_004(self): """ Test assignment of repositoryType attribute, non-None value. """ repository = Repository() self.failUnlessEqual(None, repository.repositoryType) repository.repositoryType = "" self.failUnlessEqual("", repository.repositoryType) repository.repositoryType = "test" self.failUnlessEqual("test", repository.repositoryType) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, None value. """ repository = Repository(repositoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, valid value. """ repository = Repository() self.failUnlessEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = Repository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). """ repository = Repository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_009(self): """ Test assignment of collectMode attribute, None value. 
""" repository = Repository(collectMode="daily") self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = None self.failUnlessEqual(None, repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, valid value. """ repository = Repository() self.failUnlessEqual(None, repository.collectMode) repository.collectMode = "daily" self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.failUnlessEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.failUnlessEqual("incr", repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = Repository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.failUnlessEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ repository = Repository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.failUnlessEqual(None, repository.collectMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, None value. """ repository = Repository(compressMode="gzip") self.failUnlessEqual("gzip", repository.compressMode) repository.compressMode = None self.failUnlessEqual(None, repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, valid value. 
""" repository = Repository() self.failUnlessEqual(None, repository.compressMode) repository.compressMode = "none" self.failUnlessEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.failUnlessEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = Repository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.failUnlessEqual(None, repository.compressMode) def testConstructor_016(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = Repository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.failUnlessEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repository1 = Repository() repository2 = Repository() self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "daily", "gzip") self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryType differs (one None). """ repository1 = Repository() repository2 = Repository(repositoryType="type") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryType differs. """ repository1 = Repository("other", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004a(self): """ Test comparison of two differing objects, repositoryPath differs (one None). 
""" repository1 = Repository() repository2 = Repository(repositoryPath="/zippy") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, repositoryPath differs. """ repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/zippy", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repository1 = Repository() repository2 = Repository(collectMode="incr") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. 
""" repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "incr", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs (one None). """ repository1 = Repository() repository2 = Repository(compressMode="gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs. 
""" repository1 = Repository("type", "/path", "daily", "bzip2") repository2 = Repository("type", "/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) ########################## # TestRepositoryDir class ########################## class TestRepositoryDir(unittest.TestCase): """Tests for the RepositoryDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = RepositoryDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.repositoryType) self.failUnlessEqual(None, repositoryDir.directoryPath) self.failUnlessEqual(None, repositoryDir.collectMode) self.failUnlessEqual(None, repositoryDir.compressMode) self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in. 
""" repositoryDir = RepositoryDir("type", "/path/to/it", "daily", "gzip", [ "whatever", ], [ ".*software.*", ]) self.failUnlessEqual("type", repositoryDir.repositoryType) self.failUnlessEqual("/path/to/it", repositoryDir.directoryPath) self.failUnlessEqual("daily", repositoryDir.collectMode) self.failUnlessEqual("gzip", repositoryDir.compressMode) self.failUnlessEqual([ "whatever", ], repositoryDir.relativeExcludePaths) self.failUnlessEqual([ ".*software.*", ], repositoryDir.excludePatterns) def testConstructor_003(self): """ Test assignment of repositoryType attribute, None value. """ repositoryDir = RepositoryDir(repositoryType="type") self.failUnlessEqual("type", repositoryDir.repositoryType) repositoryDir.repositoryType = None self.failUnlessEqual(None, repositoryDir.repositoryType) def testConstructor_004(self): """ Test assignment of repositoryType attribute, non-None value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.repositoryType) repositoryDir.repositoryType = "" self.failUnlessEqual("", repositoryDir.repositoryType) repositoryDir.repositoryType = "test" self.failUnlessEqual("test", repositoryDir.repositoryType) def testConstructor_005(self): """ Test assignment of directoryPath attribute, None value. """ repositoryDir = RepositoryDir(directoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repositoryDir.directoryPath) repositoryDir.directoryPath = None self.failUnlessEqual(None, repositoryDir.directoryPath) def testConstructor_006(self): """ Test assignment of directoryPath attribute, valid value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.directoryPath) repositoryDir.directoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repositoryDir.directoryPath) def testConstructor_007(self): """ Test assignment of directoryPath attribute, invalid value (empty). 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.directoryPath) self.failUnlessAssignRaises(ValueError, repositoryDir, "directoryPath", "") self.failUnlessEqual(None, repositoryDir.directoryPath) def testConstructor_008(self): """ Test assignment of directoryPath attribute, invalid value (not absolute). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.directoryPath) self.failUnlessAssignRaises(ValueError, repositoryDir, "directoryPath", "relative/path") self.failUnlessEqual(None, repositoryDir.directoryPath) def testConstructor_009(self): """ Test assignment of collectMode attribute, None value. """ repositoryDir = RepositoryDir(collectMode="daily") self.failUnlessEqual("daily", repositoryDir.collectMode) repositoryDir.collectMode = None self.failUnlessEqual(None, repositoryDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, valid value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.collectMode) repositoryDir.collectMode = "daily" self.failUnlessEqual("daily", repositoryDir.collectMode) repositoryDir.collectMode = "weekly" self.failUnlessEqual("weekly", repositoryDir.collectMode) repositoryDir.collectMode = "incr" self.failUnlessEqual("incr", repositoryDir.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.collectMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "collectMode", "") self.failUnlessEqual(None, repositoryDir.collectMode) def testConstructor_012(self): """ Test assignment of collectMode attribute, invalid value (not in list). 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.collectMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "collectMode", "monthly") self.failUnlessEqual(None, repositoryDir.collectMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, None value. """ repositoryDir = RepositoryDir(compressMode="gzip") self.failUnlessEqual("gzip", repositoryDir.compressMode) repositoryDir.compressMode = None self.failUnlessEqual(None, repositoryDir.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, valid value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.compressMode) repositoryDir.compressMode = "none" self.failUnlessEqual("none", repositoryDir.compressMode) repositoryDir.compressMode = "bzip2" self.failUnlessEqual("bzip2", repositoryDir.compressMode) repositoryDir.compressMode = "gzip" self.failUnlessEqual("gzip", repositoryDir.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.compressMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "compressMode", "") self.failUnlessEqual(None, repositoryDir.compressMode) def testConstructor_016(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.compressMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "compressMode", "compress") self.failUnlessEqual(None, repositoryDir.compressMode) def testConstructor_017(self): """ Test assignment of relativeExcludePaths attribute, None value. 
""" repositoryDir = RepositoryDir(relativeExcludePaths=[]) self.failUnlessEqual([], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = None self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) def testConstructor_018(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = [] self.failUnlessEqual([], repositoryDir.relativeExcludePaths) def testConstructor_019(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = ["stuff", ] self.failUnlessEqual(["stuff", ], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths.insert(0, "bogus") self.failUnlessEqual(["bogus", "stuff", ], repositoryDir.relativeExcludePaths) def testConstructor_020(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = ["bogus", "stuff", ] self.failUnlessEqual(["bogus", "stuff", ], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths.append("more") self.failUnlessEqual(["bogus", "stuff", "more", ], repositoryDir.relativeExcludePaths) def testConstructor_021(self): """ Test assignment of excludePatterns attribute, None value. """ repositoryDir = RepositoryDir(excludePatterns=[]) self.failUnlessEqual([], repositoryDir.excludePatterns) repositoryDir.excludePatterns = None self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_022(self): """ Test assignment of excludePatterns attribute, [] value. 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = [] self.failUnlessEqual([], repositoryDir.excludePatterns) def testConstructor_023(self): """ Test assignment of excludePatterns attribute, single valid entry. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = ["valid", ] self.failUnlessEqual(["valid", ], repositoryDir.excludePatterns) repositoryDir.excludePatterns.append("more") self.failUnlessEqual(["valid", "more", ], repositoryDir.excludePatterns) def testConstructor_024(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = ["valid", "more", ] self.failUnlessEqual(["valid", "more", ], repositoryDir.excludePatterns) repositoryDir.excludePatterns.insert(1, "bogus") self.failUnlessEqual(["valid", "bogus", "more", ], repositoryDir.excludePatterns) def testConstructor_025(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_026(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", "*" ]) self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_027(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", "valid" ]) self.failUnlessEqual(None, repositoryDir.excludePatterns) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir() self.failUnlessEqual(repositoryDir1, repositoryDir2) self.failUnless(repositoryDir1 == repositoryDir2) self.failUnless(not repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(repositoryDir1 >= repositoryDir2) self.failUnless(not repositoryDir1 != repositoryDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.failUnlessEqual(repositoryDir1, repositoryDir2) self.failUnless(repositoryDir1 == repositoryDir2) self.failUnless(not repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(repositoryDir1 >= repositoryDir2) self.failUnless(not repositoryDir1 != repositoryDir2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryType differs (one None). 
""" repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(repositoryType="type") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryType differs. """ repositoryDir1 = RepositoryDir("other", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_004a(self): """ Test comparison of two differing objects, directoryPath differs (one None). """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(directoryPath="/zippy") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_005(self): """ Test comparison of two differing objects, directoryPath differs. 
""" repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/zippy", "daily", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(collectMode="incr") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "incr", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(compressMode="gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "bzip2") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) ############################# # TestSubversionConfig class ############################# class TestSubversionConfig(unittest.TestCase): """Tests for the SubversionConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = SubversionConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" subversion = SubversionConfig() self.failUnlessEqual(None, subversion.collectMode) self.failUnlessEqual(None, subversion.compressMode) self.failUnlessEqual(None, subversion.repositories) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, repositories=None. """ subversion = SubversionConfig("daily", "gzip", None) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual(None, subversion.repositories) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no repositories. """ subversion = SubversionConfig("daily", "gzip", []) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual([], subversion.repositories) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one repository. """ repositories = [ Repository(), ] subversion = SubversionConfig("daily", "gzip", repositories) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual(repositories, subversion.repositories) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple repositories. """ repositories = [ Repository(collectMode="daily"), Repository(collectMode="weekly"), ] subversion = SubversionConfig("daily", "gzip", repositories=repositories) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual(repositories, subversion.repositories) def testConstructor_006(self): """ Test assignment of collectMode attribute, None value. 
""" subversion = SubversionConfig(collectMode="daily") self.failUnlessEqual("daily", subversion.collectMode) subversion.collectMode = None self.failUnlessEqual(None, subversion.collectMode) def testConstructor_007(self): """ Test assignment of collectMode attribute, valid value. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.collectMode) subversion.collectMode = "weekly" self.failUnlessEqual("weekly", subversion.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, invalid value (empty). """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.collectMode) self.failUnlessAssignRaises(ValueError, subversion, "collectMode", "") self.failUnlessEqual(None, subversion.collectMode) def testConstructor_009(self): """ Test assignment of compressMode attribute, None value. """ subversion = SubversionConfig(compressMode="gzip") self.failUnlessEqual("gzip", subversion.compressMode) subversion.compressMode = None self.failUnlessEqual(None, subversion.compressMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, valid value. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.compressMode) subversion.compressMode = "bzip2" self.failUnlessEqual("bzip2", subversion.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, invalid value (empty). """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.compressMode) self.failUnlessAssignRaises(ValueError, subversion, "compressMode", "") self.failUnlessEqual(None, subversion.compressMode) def testConstructor_012(self): """ Test assignment of repositories attribute, None value. """ subversion = SubversionConfig(repositories=[]) self.failUnlessEqual([], subversion.repositories) subversion.repositories = None self.failUnlessEqual(None, subversion.repositories) def testConstructor_013(self): """ Test assignment of repositories attribute, [] value. 
""" subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) subversion.repositories = [] self.failUnlessEqual([], subversion.repositories) def testConstructor_014(self): """ Test assignment of repositories attribute, single valid entry. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) subversion.repositories = [ Repository(), ] self.failUnlessEqual([ Repository(), ], subversion.repositories) subversion.repositories.append(Repository(collectMode="daily")) self.failUnlessEqual([ Repository(), Repository(collectMode="daily"), ], subversion.repositories) def testConstructor_015(self): """ Test assignment of repositories attribute, multiple valid entries. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) subversion.repositories = [ Repository(collectMode="daily"), Repository(collectMode="weekly"), ] self.failUnlessEqual([ Repository(collectMode="daily"), Repository(collectMode="weekly"), ], subversion.repositories) subversion.repositories.append(Repository(collectMode="incr")) self.failUnlessEqual([ Repository(collectMode="daily"), Repository(collectMode="weekly"), Repository(collectMode="incr"), ], subversion.repositories) def testConstructor_016(self): """ Test assignment of repositories attribute, single invalid entry (None). """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [None, ]) self.failUnlessEqual(None, subversion.repositories) def testConstructor_017(self): """ Test assignment of repositories attribute, single invalid entry (wrong type). 
""" subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [SubversionConfig(), ]) self.failUnlessEqual(None, subversion.repositories) def testConstructor_018(self): """ Test assignment of repositories attribute, mixed valid and invalid entries. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [Repository(), SubversionConfig(), ]) self.failUnlessEqual(None, subversion.repositories) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ subversion1 = SubversionConfig() subversion2 = SubversionConfig() self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ subversion1 = SubversionConfig("daily", "gzip", None) subversion2 = SubversionConfig("daily", "gzip", None) self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. 
""" subversion1 = SubversionConfig("daily", "gzip", []) subversion2 = SubversionConfig("daily", "gzip", []) self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. """ subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(collectMode="daily") self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. 
""" subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("weekly", "gzip", [ Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(compressMode="bzip2") self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ subversion1 = SubversionConfig("daily", "bzip2", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_009(self): """ Test comparison of two differing objects, repositories differs (one None, one empty). 
""" subversion1 = SubversionConfig() subversion2 = SubversionConfig(repositories=[]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_010(self): """ Test comparison of two differing objects, repositories differs (one None, one not empty). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(repositories=[Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_011(self): """ Test comparison of two differing objects, repositories differs (one empty, one not empty). """ subversion1 = SubversionConfig("daily", "gzip", [ ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_012(self): """ Test comparison of two differing objects, repositories differs (both not empty). 
""" subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_013(self): """ Test comparison of two differing objects, repositories differs (both not empty). """ subversion1 = SubversionConfig("daily", "gzip", [ Repository(repositoryType="other"), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(repositoryType="type"), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. 
We dump a document containing just the subversion configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.subversion) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.subversion) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["subversion.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of subversion attribute, None value. """ config = LocalConfig() config.subversion = None self.failUnlessEqual(None, config.subversion) def testConstructor_005(self): """ Test assignment of subversion attribute, valid value. 
""" config = LocalConfig() config.subversion = SubversionConfig() self.failUnlessEqual(SubversionConfig(), config.subversion) def testConstructor_006(self): """ Test assignment of subversion attribute, invalid value (not SubversionConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "subversion", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.subversion = SubversionConfig() config2 = LocalConfig() config2.subversion = SubversionConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, subversion differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.subversion = SubversionConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, subversion differs. 
""" config1 = LocalConfig() config1.subversion = SubversionConfig(collectMode="daily") config2 = LocalConfig() config2.subversion = SubversionConfig(collectMode="weekly") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None subversion section. """ config = LocalConfig() config.subversion = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty subversion section. """ config = LocalConfig() config.subversion = SubversionConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty subversion section, repositories=None. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty subversion section, repositories=[]. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", []) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty subversion section, non-empty repositories, defaults set, no values on repositories. """ repositories = [ Repository(repositoryPath="/one"), Repository(repositoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_006(self): """ Test validate on a non-empty subversion section, non-empty repositories, no defaults set, no values on repositiories. 
""" repositories = [ Repository(repositoryPath="/one"), Repository(repositoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositories = repositories self.failUnlessRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty subversion section, non-empty repositories, no defaults set, both values on repositories. """ repositories = [ Repository(repositoryPath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositories = repositories config.validate() def testValidate_008(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode only on repositories. """ repositories = [ Repository(repositoryPath="/two", collectMode="weekly") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_009(self): """ Test validate on a non-empty subversion section, non-empty repositories, compressMode only on repositories. """ repositories = [ Repository(repositoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "weekly" config.subversion.repositories = repositories config.validate() def testValidate_010(self): """ Test validate on a non-empty subversion section, non-empty repositories, compressMode default and on repository. """ repositories = [ Repository(repositoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_011(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode default and on repository. 
""" repositories = [ Repository(repositoryPath="/two", collectMode="daily") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_012(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode and compressMode default and on repository. """ repositories = [ Repository(repositoryPath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_013(self): """ Test validate on a non-empty subversion section, repositoryDirs=None. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", repositoryDirs=None) self.failUnlessRaises(ValueError, config.validate) def testValidate_014(self): """ Test validate on a non-empty subversion section, repositoryDirs=[]. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", repositoryDirs=[]) self.failUnlessRaises(ValueError, config.validate) def testValidate_015(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, defaults set, no values on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/one"), RepositoryDir(directoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_016(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, no defaults set, no values on repositiories. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/one"), RepositoryDir(directoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositoryDirs = repositoryDirs self.failUnlessRaises(ValueError, config.validate) def testValidate_017(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, no defaults set, both values on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_018(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode only on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="weekly") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_019(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, compressMode only on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "weekly" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_020(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, compressMode default and on repository. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_021(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode default and on repository. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="daily") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_022(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode and compressMode default and on repository. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["subversion.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.subversion) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.subversion) def testParse_002(self): """ Parse config document with default modes, one repository. 
""" repositories = [ Repository(repositoryPath="/opt/public/svn/software"), ] path = self.resources["subversion.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) def testParse_003(self): """ Parse config document with no default modes, one repository """ repositories = [ Repository(repositoryPath="/opt/public/svn/software", collectMode="daily", compressMode="gzip"), ] path = self.resources["subversion.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) def testParse_004(self): """ Parse config document with default modes, several repositories with various overrides. 
""" repositories = [] repositories.append(Repository(repositoryPath="/opt/public/svn/one")) repositories.append(Repository(repositoryType="BDB", repositoryPath="/opt/public/svn/two", collectMode="weekly")) repositories.append(Repository(repositoryPath="/opt/public/svn/three", compressMode="bzip2")) repositories.append(Repository(repositoryType="FSFS", repositoryPath="/opt/public/svn/four", collectMode="incr", compressMode="bzip2")) path = self.resources["subversion.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) def testParse_005(self): """ Parse config document with default modes, one repository. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/opt/public/svn/software"), ] path = self.resources["subversion.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) def testParse_006(self): """ Parse config document with no default modes, one repository """ repositoryDirs = [ RepositoryDir(directoryPath="/opt/public/svn/software", collectMode="daily", compressMode="gzip"), ] path = self.resources["subversion.conf.6"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) def testParse_007(self): """ Parse config document with default modes, several repositoryDirs with various overrides. 
""" repositoryDirs = [] repositoryDirs.append(RepositoryDir(directoryPath="/opt/public/svn/one")) repositoryDirs.append(RepositoryDir(repositoryType="BDB", directoryPath="/opt/public/svn/two", collectMode="weekly", relativeExcludePaths=["software", ])) repositoryDirs.append(RepositoryDir(directoryPath="/opt/public/svn/three", compressMode="bzip2", excludePatterns=[".*software.*", ])) repositoryDirs.append(RepositoryDir(repositoryType="FSFS", directoryPath="/opt/public/svn/four", collectMode="incr", compressMode="bzip2", relativeExcludePaths=["cedar", "banner", ], excludePatterns=[".*software.*", ".*database.*", ])) path = self.resources["subversion.conf.7"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ subversion = SubversionConfig() config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_002(self): """ Test with defaults set, single repository with no optional values. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_003(self): """ Test with defaults set, single repository with collectMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="incr")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_004(self): """ Test with defaults set, single repository with compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", compressMode="bzip2")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_005(self): """ Test with defaults set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="bzip2")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_006(self): """ Test with no defaults set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="bzip2")) subversion = SubversionConfig(repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_007(self): """ Test with compressMode set, single repository with collectMode set. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly")) subversion = SubversionConfig(compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_008(self): """ Test with collectMode set, single repository with compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", compressMode="gzip")) subversion = SubversionConfig(collectMode="weekly", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_009(self): """ Test with compressMode set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="incr", compressMode="gzip")) subversion = SubversionConfig(compressMode="bzip2", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_010(self): """ Test with collectMode set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="gzip")) subversion = SubversionConfig(collectMode="incr", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_011(self): """ Test with defaults set, multiple repositories with collectMode and compressMode set. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path1", collectMode="daily", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path2", collectMode="weekly", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path3", collectMode="incr", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path1", collectMode="daily", compressMode="bzip2")) repositories.append(Repository(repositoryPath="/path2", collectMode="weekly", compressMode="bzip2")) repositories.append(Repository(repositoryPath="/path3", collectMode="incr", compressMode="bzip2")) subversion = SubversionConfig(collectMode="incr", compressMode="bzip2", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestBDBRepository, 'test'), unittest.makeSuite(TestFSFSRepository, 'test'), unittest.makeSuite(TestRepository, 'test'), unittest.makeSuite(TestRepositoryDir, 'test'), unittest.makeSuite(TestSubversionConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/mysqltests.py0000664000175000017500000011674411415165677022214 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." 
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2005-2006,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: mysqltests.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Tests MySQL extension functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/extend/mysql.py.

Code Coverage
=============

This module contains individual tests for many of the public functions and
classes implemented in extend/mysql.py.  There are also tests for several of
the private methods.

Unfortunately, it's rather difficult to test this code in an automated
fashion, even if you have access to MySQL, since the actual dump would need
to have access to a real database.  Because of this, there aren't any tests
below that actually talk to a database.

As a compromise, I test some of the private methods in the implementation.
Normally, I don't like to test private methods, but in this case, testing
the private methods will help give us some reasonable confidence in the code
even if we can't talk to a database.  This isn't perfect, but it's better
than nothing.

Naming Conventions
==================

I prefer to avoid large unit tests which validate more than one piece of
functionality, and I prefer to avoid using overly descriptive (read: long)
test names, as well.  Instead, I use lots of very small tests that each
validate one specific thing.  These small tests are then named with an index
number, yielding something like C{testAddDir_001} or C{testValidate_010}.
Each method has a docstring describing what it's supposed to accomplish.  I
feel that this makes it easier to judge how important a given failure is,
and also makes it somewhat easier to diagnose and fix individual problems.

Testing XML Extraction
======================

It's difficult to validate that generated XML is exactly "right", especially
when dealing with pretty-printed XML.  We can't just provide a constant
string and say "the result must match this".  Instead, what we do is extract
a node, build some XML from it, and then feed that XML back into another
object's constructor.  If that parse process succeeds and the old object is
equal to the new object, we assume that the extract was successful.

It would arguably be better if we could do a completely independent check -
but implementing that check would be equivalent to re-implementing all of
the existing functionality that we're validating here!  After all, the most
important thing is that data can move seamlessly from object to XML document
and back to object.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an average
build environment.  There is no need to use a MYSQLTESTS_FULL environment
variable to provide a "reduced feature set" test suite as for some of the
other test modules.

@author Kenneth J.
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup2.testutil import findResources, failUnlessAssignRaises from CedarBackup2.xmlutil import createOutputDom, serializeDom from CedarBackup2.extend.mysql import LocalConfig, MysqlConfig ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "mysql.conf.1", "mysql.conf.2", "mysql.conf.3", "mysql.conf.4", "mysql.conf.5", ] ####################################################################### # Test Case Classes ####################################################################### ######################## # TestMysqlConfig class ######################## class TestMysqlConfig(unittest.TestCase): """Tests for the MysqlConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MysqlConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" mysql = MysqlConfig() self.failUnlessEqual(None, mysql.user) self.failUnlessEqual(None, mysql.password) self.failUnlessEqual(None, mysql.compressMode) self.failUnlessEqual(False, mysql.all) self.failUnlessEqual(None, mysql.databases) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, databases=None. """ mysql = MysqlConfig("user", "password", "none", False, None) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("none", mysql.compressMode) self.failUnlessEqual(False, mysql.all) self.failUnlessEqual(None, mysql.databases) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no databases. """ mysql = MysqlConfig("user", "password", "none", True, []) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("none", mysql.compressMode) self.failUnlessEqual(True, mysql.all) self.failUnlessEqual([], mysql.databases) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one database. """ mysql = MysqlConfig("user", "password", "gzip", True, [ "one", ]) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("gzip", mysql.compressMode) self.failUnlessEqual(True, mysql.all) self.failUnlessEqual([ "one", ], mysql.databases) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple databases. """ mysql = MysqlConfig("user", "password", "bzip2", True, [ "one", "two", ]) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("bzip2", mysql.compressMode) self.failUnlessEqual(True, mysql.all) self.failUnlessEqual([ "one", "two", ], mysql.databases) def testConstructor_006(self): """ Test assignment of user attribute, None value. 
""" mysql = MysqlConfig(user="user") self.failUnlessEqual("user", mysql.user) mysql.user = None self.failUnlessEqual(None, mysql.user) def testConstructor_007(self): """ Test assignment of user attribute, valid value. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.user) mysql.user = "user" self.failUnlessEqual("user", mysql.user) def testConstructor_008(self): """ Test assignment of user attribute, invalid value (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.user) self.failUnlessAssignRaises(ValueError, mysql, "user", "") self.failUnlessEqual(None, mysql.user) def testConstructor_009(self): """ Test assignment of password attribute, None value. """ mysql = MysqlConfig(password="password") self.failUnlessEqual("password", mysql.password) mysql.password = None self.failUnlessEqual(None, mysql.password) def testConstructor_010(self): """ Test assignment of password attribute, valid value. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.password) mysql.password = "password" self.failUnlessEqual("password", mysql.password) def testConstructor_011(self): """ Test assignment of password attribute, invalid value (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.password) self.failUnlessAssignRaises(ValueError, mysql, "password", "") self.failUnlessEqual(None, mysql.password) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ mysql = MysqlConfig(compressMode="none") self.failUnlessEqual("none", mysql.compressMode) mysql.compressMode = None self.failUnlessEqual(None, mysql.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. 
""" mysql = MysqlConfig() self.failUnlessEqual(None, mysql.compressMode) mysql.compressMode = "none" self.failUnlessEqual("none", mysql.compressMode) mysql.compressMode = "gzip" self.failUnlessEqual("gzip", mysql.compressMode) mysql.compressMode = "bzip2" self.failUnlessEqual("bzip2", mysql.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.compressMode) self.failUnlessAssignRaises(ValueError, mysql, "compressMode", "") self.failUnlessEqual(None, mysql.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.compressMode) self.failUnlessAssignRaises(ValueError, mysql, "compressMode", "bogus") self.failUnlessEqual(None, mysql.compressMode) def testConstructor_016(self): """ Test assignment of all attribute, None value. """ mysql = MysqlConfig(all=True) self.failUnlessEqual(True, mysql.all) mysql.all = None self.failUnlessEqual(False, mysql.all) def testConstructor_017(self): """ Test assignment of all attribute, valid value (real boolean). """ mysql = MysqlConfig() self.failUnlessEqual(False, mysql.all) mysql.all = True self.failUnlessEqual(True, mysql.all) mysql.all = False self.failUnlessEqual(False, mysql.all) def testConstructor_018(self): """ Test assignment of all attribute, valid value (expression). """ mysql = MysqlConfig() self.failUnlessEqual(False, mysql.all) mysql.all = 0 self.failUnlessEqual(False, mysql.all) mysql.all = [] self.failUnlessEqual(False, mysql.all) mysql.all = None self.failUnlessEqual(False, mysql.all) mysql.all = ['a'] self.failUnlessEqual(True, mysql.all) mysql.all = 3 self.failUnlessEqual(True, mysql.all) def testConstructor_019(self): """ Test assignment of databases attribute, None value. 
""" mysql = MysqlConfig(databases=[]) self.failUnlessEqual([], mysql.databases) mysql.databases = None self.failUnlessEqual(None, mysql.databases) def testConstructor_020(self): """ Test assignment of databases attribute, [] value. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) mysql.databases = [] self.failUnlessEqual([], mysql.databases) def testConstructor_021(self): """ Test assignment of databases attribute, single valid entry. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) mysql.databases = ["/whatever", ] self.failUnlessEqual(["/whatever", ], mysql.databases) mysql.databases.append("/stuff") self.failUnlessEqual(["/whatever", "/stuff", ], mysql.databases) def testConstructor_022(self): """ Test assignment of databases attribute, multiple valid entries. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) mysql.databases = ["/whatever", "/stuff", ] self.failUnlessEqual(["/whatever", "/stuff", ], mysql.databases) mysql.databases.append("/etc/X11") self.failUnlessEqual(["/whatever", "/stuff", "/etc/X11", ], mysql.databases) def testConstructor_023(self): """ Test assignment of databases attribute, single invalid entry (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) self.failUnlessAssignRaises(ValueError, mysql, "databases", ["", ]) self.failUnlessEqual(None, mysql.databases) def testConstructor_024(self): """ Test assignment of databases attribute, mixed valid and invalid entries. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) self.failUnlessAssignRaises(ValueError, mysql, "databases", ["good", "", "alsogood", ]) self.failUnlessEqual(None, mysql.databases) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig() self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ mysql1 = MysqlConfig("user", "password", "gzip", True, None) mysql2 = MysqlConfig("user", "password", "gzip", True, None) self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. """ mysql1 = MysqlConfig("user", "password", "bzip2", True, []) mysql2 = MysqlConfig("user", "password", "bzip2", True, []) self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. """ mysql1 = MysqlConfig("user", "password", "none", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "none", True, [ "whatever", ]) self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_005(self): """ Test comparison of two differing objects, user differs (one None). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(user="user") self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_006(self): """ Test comparison of two differing objects, user differs. """ mysql1 = MysqlConfig("user1", "password", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user2", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_007(self): """ Test comparison of two differing objects, password differs (one None). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(password="password") self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_008(self): """ Test comparison of two differing objects, password differs. """ mysql1 = MysqlConfig("user", "password1", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password2", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(compressMode="gzip") self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_010(self): """ Test comparison of two differing objects, compressMode differs. """ mysql1 = MysqlConfig("user", "password", "bzip2", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_011(self): """ Test comparison of two differing objects, all differs (one None). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(all=True) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_012(self): """ Test comparison of two differing objects, all differs. """ mysql1 = MysqlConfig("user", "password", "gzip", False, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_013(self): """ Test comparison of two differing objects, databases differs (one None, one empty). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(databases=[]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_014(self): """ Test comparison of two differing objects, databases differs (one None, one not empty). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(databases=["whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_015(self): """ Test comparison of two differing objects, databases differs (one empty, one not empty). """ mysql1 = MysqlConfig("user", "password", "gzip", True, [ ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_016(self): """ Test comparison of two differing objects, databases differs (both not empty). 
""" mysql1 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", "bogus", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) # note: different than standard due to unsorted list self.failUnless(not mysql1 <= mysql2) # note: different than standard due to unsorted list self.failUnless(mysql1 > mysql2) # note: different than standard due to unsorted list self.failUnless(mysql1 >= mysql2) # note: different than standard due to unsorted list self.failUnless(mysql1 != mysql2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the mysql configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. 
""" (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.mysql) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.mysql) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["mysql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of mysql attribute, None value. """ config = LocalConfig() config.mysql = None self.failUnlessEqual(None, config.mysql) def testConstructor_005(self): """ Test assignment of mysql attribute, valid value. """ config = LocalConfig() config.mysql = MysqlConfig() self.failUnlessEqual(MysqlConfig(), config.mysql) def testConstructor_006(self): """ Test assignment of mysql attribute, invalid value (not MysqlConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "mysql", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.mysql = MysqlConfig() config2 = LocalConfig() config2.mysql = MysqlConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, mysql differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.mysql = MysqlConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, mysql differs. """ config1 = LocalConfig() config1.mysql = MysqlConfig(user="one") config2 = LocalConfig() config2.mysql = MysqlConfig(user="two") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None mysql section. 
""" config = LocalConfig() config.mysql = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty mysql section. """ config = LocalConfig() config.mysql = MysqlConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty mysql section, all=True, databases=None. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", True, None) config.validate() def testValidate_004(self): """ Test validate on a non-empty mysql section, all=True, empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", True, []) config.validate() def testValidate_005(self): """ Test validate on a non-empty mysql section, all=True, non-empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, ["whatever", ]) self.failUnlessRaises(ValueError, config.validate) def testValidate_006(self): """ Test validate on a non-empty mysql section, all=False, databases=None. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty mysql section, all=False, empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", False, []) self.failUnlessRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty mysql section, all=False, non-empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, ["whatever", ]) config.validate() def testValidate_009(self): """ Test validate on a non-empty mysql section, with user=None. """ config = LocalConfig() config.mysql = MysqlConfig(None, "password", "gzip", True, None) config.validate() def testValidate_010(self): """ Test validate on a non-empty mysql section, with password=None. 
""" config = LocalConfig() config.mysql = MysqlConfig("user", None, "gzip", True, None) config.validate() def testValidate_011(self): """ Test validate on a non-empty mysql section, with user=None and password=None. """ config = LocalConfig() config.mysql = MysqlConfig(None, None, "gzip", True, None) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["mysql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.mysql) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.mysql) def testParse_003(self): """ Parse config document containing only a mysql section, no databases, all=True. """ path = self.resources["mysql.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("none", config.mysql.compressMode) self.failUnlessEqual(True, config.mysql.all) self.failUnlessEqual(None, config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("none", config.mysql.compressMode) self.failIfEqual(None, config.mysql.password) self.failUnlessEqual(True, config.mysql.all) self.failUnlessEqual(None, config.mysql.databases) def testParse_004(self): """ Parse config document containing only a mysql section, single database, all=False. 
""" path = self.resources["mysql.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("gzip", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("gzip", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database", ], config.mysql.databases) def testParse_005(self): """ Parse config document containing only a mysql section, multiple databases, all=False. """ path = self.resources["mysql.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) def testParse_006(self): """ Parse config document containing only a mysql section, no user or password, multiple databases, all=False. 
""" path = self.resources["mysql.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual(None, config.mysql.user) self.failUnlessEqual(None, config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual(None, config.mysql.user) self.failUnlessEqual(None, config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document """ config = LocalConfig() self.validateAddConfig(config) def testAddConfig_003(self): """ Test with no databases, all other values filled in, all=True. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", True, None) self.validateAddConfig(config) def testAddConfig_004(self): """ Test with no databases, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, None) self.validateAddConfig(config) def testAddConfig_005(self): """ Test with single database, all other values filled in, all=True. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, [ "database", ]) self.validateAddConfig(config) def testAddConfig_006(self): """ Test with single database, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", False, [ "database", ]) self.validateAddConfig(config) def testAddConfig_007(self): """ Test with multiple databases, all other values filled in, all=True. 
""" config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_008(self): """ Test with multiple databases, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_009(self): """ Test with multiple databases, user=None but all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig(None, "password", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_010(self): """ Test with multiple databases, password=None but all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_011(self): """ Test with multiple databases, user=None and password=None but all other values filled in, all=False. 
""" config = LocalConfig() config.mysql = MysqlConfig(None, None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMysqlConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/writersutiltests.py0000664000175000017500000020475211645152363023432 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2011 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: writersutiltests.py 1023 2011-10-11 23:44:50Z pronovic $ # Purpose : Tests writer utility functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/writers/util.py. Code Coverage ============= This module contains individual tests for the public functions and classes implemented in writers/util.py. I usually prefer to test only the public interface to a class, because that way the regression tests don't depend on the internal implementation. In this case, I've decided to test some of the private methods, because their "privateness" is more a matter of presenting a clean external interface than anything else (most of the private methods are static). Being able to test these methods also makes it easier to gain some reasonable confidence in the code even if some tests are not run because WRITERSUTILTESTS_FULL is not set to "Y" in the environment (see below). Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. 
This environment won't necessarily be available on
   every build system out there (for instance, on a Debian autobuilder).
   Because of this, the default behavior is to run a "reduced feature set"
   test suite that has no surprising system, kernel or network requirements.
   If you want to run all of the tests, set WRITERSUTILTESTS_FULL to "Y" in
   the environment.

   In this module, there are three dependencies: the system must have
   C{mkisofs} installed, the kernel must allow ISO images to be mounted
   in-place via a loopback mechanism, and the current user must be allowed
   (via C{sudo}) to mount and unmount such loopback filesystems.  See
   documentation by the L{TestIsoImage.mountImage} and
   L{TestIsoImage.unmountImage} methods for more information on what C{sudo}
   access is required.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import os
import unittest
import tempfile
import time

from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar
from CedarBackup2.testutil import platformMacOsX, platformSupportsLinks
from CedarBackup2.filesystem import FilesystemList
from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed, IsoImage
from CedarBackup2.util import executeCommand


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "tree9.tar.gz", ]

SUDO_CMD = [ "sudo", ]
HDIUTIL_CMD = [ "hdiutil", ]
GCONF_CMD = [ "gconftool-2", ]

INVALID_FILE = "bogus"      # This file name should never exist


#######################################################################
# Utility functions
#######################################################################

def runAllTests():
   """Returns true/false
depending on whether the full test suite should be run."""
   if "WRITERSUTILTESTS_FULL" in os.environ:
      return os.environ["WRITERSUTILTESTS_FULL"] == "Y"
   else:
      return False


#######################################################################
# Test Case Classes
#######################################################################

######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the various public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ########################
   # Test validateScsiId()
   ########################

   def testValidateScsiId_001(self):
      """
      Test with simple scsibus,target,lun address.
      """
      scsiId = "0,0,0"
      result = validateScsiId(scsiId)
      self.failUnlessEqual(scsiId, result)

   def testValidateScsiId_002(self):
      """
      Test with simple scsibus,target,lun address containing spaces.
      """
      scsiId = " 0, 0, 0 "
      result = validateScsiId(scsiId)
      self.failUnlessEqual(scsiId, result)

   def testValidateScsiId_003(self):
      """
      Test with simple ATA address.
      """
      scsiId = "ATA:3,2,1"
      result = validateScsiId(scsiId)
      self.failUnlessEqual(scsiId, result)

   def testValidateScsiId_004(self):
      """
      Test with simple ATA address containing spaces.
      """
      scsiId = "ATA: 3, 2,1 "
      result = validateScsiId(scsiId)
      self.failUnlessEqual(scsiId, result)

   def testValidateScsiId_005(self):
      """
      Test with simple ATAPI address.
      """
      scsiId = "ATAPI:1,2,3"
      result = validateScsiId(scsiId)
      self.failUnlessEqual(scsiId, result)

   def testValidateScsiId_006(self):
      """
      Test with simple ATAPI address containing spaces.
      """
      scsiId = " ATAPI:1, 2, 3"
      result = validateScsiId(scsiId)
      self.failUnlessEqual(scsiId, result)

   def testValidateScsiId_007(self):
      """
      Test with default-device Mac address.
      """
      scsiId = "IOCompactDiscServices"
      result = validateScsiId(scsiId)
      self.failUnlessEqual(scsiId, result)

   def testValidateScsiId_008(self):
      """
      Test with an alternate-device Mac address.
""" scsiId = "IOCompactDiscServices/2" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_009(self): """ Test with an alternate-device Mac address. """ scsiId = "IOCompactDiscServices/12" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_010(self): """ Test with an invalid address with a missing field. """ scsiId = "1,2" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_011(self): """ Test with an invalid Mac-style address with a backslash. """ scsiId = "IOCompactDiscServices\\3" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_012(self): """ Test with an invalid address with an invalid prefix separator. """ scsiId = "ATAPI;1,2,3" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_013(self): """ Test with an invalid address with an invalid prefix separator. """ scsiId = "ATA-1,2,3" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_014(self): """ Test with a None SCSI id. """ scsiId = None result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) ############################ # Test validateDriveSpeed() ############################ def testValidateDriveSpeed_001(self): """ Test for a valid drive speed. """ speed = 1 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 2 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 30 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 2.0 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 1.3 result = validateDriveSpeed(speed) self.failUnlessEqual(result, 1) # truncated def testValidateDriveSpeed_002(self): """ Test for a None drive speed (special case). 
""" speed = None result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) def testValidateDriveSpeed_003(self): """ Test for an invalid drive speed (zero) """ speed = 0 self.failUnlessRaises(ValueError, validateDriveSpeed, speed) def testValidateDriveSpeed_004(self): """ Test for an invalid drive speed (negative) """ speed = -1 self.failUnlessRaises(ValueError, validateDriveSpeed, speed) def testValidateDriveSpeed_005(self): """ Test for an invalid drive speed (not integer) """ speed = "ken" self.failUnlessRaises(ValueError, validateDriveSpeed, speed) ##################### # TestIsoImage class ##################### class TestIsoImage(unittest.TestCase): """Tests for the IsoImage class.""" ################ # Setup methods ################ def setUp(self): try: self.disableGnomeAutomount() self.mounted = False self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): if self.mounted: self.unmountImage() removedir(self.tmpdir) self.enableGnomeAutomount() ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def mountImage(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using loopback. This function chooses the correct operating system-specific function and calls it. If there is no operating-system-specific function, we fall back to the generic function, which uses 'sudo mount'. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. 
""" if platformMacOsX(): return self.mountImageDarwin(imagePath) else: return self.mountImageGeneric(imagePath) def mountImageDarwin(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using Darwin's C{hdiutil} program. Darwin (Mac OS X) uses the C{hdiutil} program to mount volumes. The mount command doesn't really exist (or rather, doesn't know what to do with ISO 9660 volumes). @note: According to the manpage, the mountpoint path can't be any longer than MNAMELEN characters (currently 90?) so you might have problems with this depending on how your test environment is set up. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. """ mountPath = self.buildPath([ "mnt", ]) os.mkdir(mountPath) args = [ "attach", "-mountpoint", mountPath, imagePath, ] (result, output) = executeCommand(HDIUTIL_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to mount image." % result) self.mounted = True return mountPath def mountImageGeneric(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using loopback. Note that this will fail unless the user has been granted permissions via sudo, using something like this: Cmnd_Alias LOOPMOUNT = /bin/mount -d -t iso9660 -o loop * * Keep in mind that this entry is a security hole, so you might not want to keep it in C{/etc/sudoers} all of the time. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. """ mountPath = self.buildPath([ "mnt", ]) os.mkdir(mountPath) args = [ "mount", "-t", "iso9660", "-o", "loop", imagePath, mountPath, ] (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to mount image." % result) self.mounted = True return mountPath def unmountImage(self): """ Unmounts an ISO image from C{self.tmpdir/mnt}. This function chooses the correct operating system-specific function and calls it. 
      If there is no operating-system-specific function, we fall back to the
      generic function, which uses 'sudo umount'.

      @raise IOError: If the command cannot be executed.
      """
      if platformMacOsX():
         self.unmountImageDarwin()
      else:
         self.unmountImageGeneric()

   def unmountImageDarwin(self):
      """
      Unmounts an ISO image from C{self.tmpdir/mnt} using Darwin's C{hdiutil} program.

      Darwin (Mac OS X) uses the C{hdiutil} program to mount volumes.  The
      mount command doesn't really exist (or rather, doesn't know what to do
      with ISO 9660 volumes).

      @note: According to the manpage, the mountpoint path can't be any longer
      than MNAMELEN characters (currently 90?) so you might have problems with
      this depending on how your test environment is set up.

      @raise IOError: If the command cannot be executed.
      """
      mountPath = self.buildPath([ "mnt", ])
      args = [ "detach", mountPath, ]
      (result, output) = executeCommand(HDIUTIL_CMD, args, returnOutput=True)
      if result != 0:
         raise IOError("Error (%d) executing command to unmount image." % result)
      self.mounted = False

   def unmountImageGeneric(self):
      """
      Unmounts an ISO image from C{self.tmpdir/mnt}.

      Sometimes, multiple tries are needed because the ISO filesystem is still
      in use.  We try twice, with a 1-second pause between attempts.  If this
      isn't successful, you may run out of loopback devices.  Check for
      leftover mounts using 'losetup -a' as root.  You can remove a leftover
      mount using something like 'losetup -d /dev/loop0'.

      Note that this will fail unless the user has been granted permissions via
      sudo, using something like this:

         Cmnd_Alias LOOPUNMOUNT = /bin/umount -d -t iso9660 *

      Keep in mind that this entry is a security hole, so you might not want to
      keep it in C{/etc/sudoers} all of the time.

      @raise IOError: If the command cannot be executed.
""" mountPath = self.buildPath([ "mnt", ]) args = [ "umount", "-d", "-t", "iso9660", mountPath, ] (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: time.sleep(1) (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to unmount image." % result) self.mounted = False def disableGnomeAutomount(self): """ Disables GNOME auto-mounting of ISO volumes when full tests are enabled. As of this writing (October 2011), recent versions of GNOME in Debian come pre-configured to auto-mount various kinds of media (like CDs and thumb drives). Besides auto-mounting the media, GNOME also often opens up a Nautilus browser window to explore the newly-mounted media. This causes lots of problems for these unit tests, which assume that they have complete control over the mounting and unmounting process. So, for these tests to work, we need to disable GNOME auto-mounting. """ self.origMediaAutomount = None self.origMediaAutomountOpen = None if runAllTests(): args = [ "--get", "/apps/nautilus/preferences/media_automount", ] (result, output) = executeCommand(GCONF_CMD, args, returnOutput=True) if result == 0: self.origMediaAutomount = output[0][:-1] # pylint: disable=W0201 if self.origMediaAutomount == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount", "false", ] executeCommand(GCONF_CMD, args) args = [ "--get", "/apps/nautilus/preferences/media_automount_open", ] (result, output) = executeCommand(GCONF_CMD, args, returnOutput=True) if result == 0: self.origMediaAutomountOpen = output[0][:-1] # pylint: disable=W0201 if self.origMediaAutomountOpen == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount_open", "false", ] executeCommand(GCONF_CMD, args) def enableGnomeAutomount(self): """ Resets GNOME auto-mounting options back to their state prior to disableGnomeAutomount(). 
""" if self.origMediaAutomount == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount", "true", ] executeCommand(GCONF_CMD, args) if self.origMediaAutomountOpen == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount_open", "true", ] executeCommand(GCONF_CMD, args) ################### # Test constructor ################### def testConstructor_001(self): """ Test the constructor using all default arguments. """ isoImage = IsoImage() self.failUnlessEqual(None, isoImage.device) self.failUnlessEqual(None, isoImage.boundaries) self.failUnlessEqual(None, isoImage.graftPoint) self.failUnlessEqual(True, isoImage.useRockRidge) self.failUnlessEqual(None, isoImage.applicationId) self.failUnlessEqual(None, isoImage.biblioFile) self.failUnlessEqual(None, isoImage.publisherId) self.failUnlessEqual(None, isoImage.preparerId) self.failUnlessEqual(None, isoImage.volumeId) def testConstructor_002(self): """ Test the constructor using non-default arguments. """ isoImage = IsoImage("/dev/cdrw", boundaries=(1, 2), graftPoint="/france") self.failUnlessEqual("/dev/cdrw", isoImage.device) self.failUnlessEqual((1, 2), isoImage.boundaries) self.failUnlessEqual("/france", isoImage.graftPoint) self.failUnlessEqual(True, isoImage.useRockRidge) self.failUnlessEqual(None, isoImage.applicationId) self.failUnlessEqual(None, isoImage.biblioFile) self.failUnlessEqual(None, isoImage.publisherId) self.failUnlessEqual(None, isoImage.preparerId) self.failUnlessEqual(None, isoImage.volumeId) ################################ # Test IsoImage utility methods ################################ def testUtilityMethods_001(self): """ Test _buildDirEntries() with an empty entries dictionary. """ entries = {} result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(0, len(result)) def testUtilityMethods_002(self): """ Test _buildDirEntries() with an entries dictionary that has no graft points. 
""" entries = {} entries["/one/two/three"] = None entries["/four/five/six"] = None entries["/seven/eight/nine"] = None result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(3, len(result)) self.failUnless("/one/two/three" in result) self.failUnless("/four/five/six" in result) self.failUnless("/seven/eight/nine" in result) def testUtilityMethods_003(self): """ Test _buildDirEntries() with an entries dictionary that has all graft points. """ entries = {} entries["/one/two/three"] = "/backup1" entries["/four/five/six"] = "backup2" entries["/seven/eight/nine"] = "backup3" result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(3, len(result)) self.failUnless("backup1/=/one/two/three" in result) self.failUnless("backup2/=/four/five/six" in result) self.failUnless("backup3/=/seven/eight/nine" in result) def testUtilityMethods_004(self): """ Test _buildDirEntries() with an entries dictionary that has mixed graft points and not. """ entries = {} entries["/one/two/three"] = "backup1" entries["/four/five/six"] = None entries["/seven/eight/nine"] = "/backup3" result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(3, len(result)) self.failUnless("backup1/=/one/two/three" in result) self.failUnless("/four/five/six" in result) self.failUnless("backup3/=/seven/eight/nine" in result) def testUtilityMethods_005(self): """ Test _buildGeneralArgs() with all optional values as None. """ isoImage = IsoImage() result = isoImage._buildGeneralArgs() self.failUnlessEqual(0, len(result)) def testUtilityMethods_006(self): """ Test _buildGeneralArgs() with applicationId set. """ isoImage = IsoImage() isoImage.applicationId = "one" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-A", "one", ], result) def testUtilityMethods_007(self): """ Test _buildGeneralArgs() with biblioFile set. 
""" isoImage = IsoImage() isoImage.biblioFile = "two" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-biblio", "two", ], result) def testUtilityMethods_008(self): """ Test _buildGeneralArgs() with publisherId set. """ isoImage = IsoImage() isoImage.publisherId = "three" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-publisher", "three", ], result) def testUtilityMethods_009(self): """ Test _buildGeneralArgs() with preparerId set. """ isoImage = IsoImage() isoImage.preparerId = "four" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-p", "four", ], result) def testUtilityMethods_010(self): """ Test _buildGeneralArgs() with volumeId set. """ isoImage = IsoImage() isoImage.volumeId = "five" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-V", "five", ], result) def testUtilityMethods_011(self): """ Test _buildSizeArgs() with device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_012(self): """ Test _buildSizeArgs() with useRockRidge set to True and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = True result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_013(self): """ Test _buildSizeArgs() with useRockRidge set to False and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = False result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "backup1/=/one/two/three", ], result) def testUtilityMethods_014(self): """ Test _buildSizeArgs() with device as None and boundaries as non-None. 
""" entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device=None, boundaries=(1, 2)) result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_015(self): """ Test _buildSizeArgs() with device as non-None and boundaries as None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=None) result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_016(self): """ Test _buildSizeArgs() with device and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=(1, 2)) result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "-C", "1,2", "-M", "/dev/cdrw", "backup1/=/one/two/three", ], result) def testUtilityMethods_017(self): """ Test _buildWriteArgs() with device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-r", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_018(self): """ Test _buildWriteArgs() with useRockRidge set to True and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = True result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-r", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_019(self): """ Test _buildWriteArgs() with useRockRidge set to False and device and boundaries at defaults. 
""" entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_020(self): """ Test _buildWriteArgs() with device as None and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device=None, boundaries=(3, 4)) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_021(self): """ Test _buildWriteArgs() with device as non-None and boundaries as None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=None) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_022(self): """ Test _buildWriteArgs() with device and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=(3, 4)) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "-C", "3,4", "-M", "/dev/cdrw", "backup1/=/one/two/three", ], result) ################## # Test addEntry() ################## def testAddEntry_001(self): """ Attempt to add a non-existent entry. """ file1 = self.buildPath([ INVALID_FILE, ]) isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_002(self): """ Attempt to add a an entry that is a soft link to a file. 
""" if platformSupportsLinks(): self.extractTar("tree9") file1 = self.buildPath([ "tree9", "dir002", "link003", ]) isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_003(self): """ Attempt to add a an entry that is a soft link to a directory """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "link001", ]) isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_004(self): """ Attempt to add a file, no graft point set. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_005(self): """ Attempt to add a file, graft point set on the object level. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_006(self): """ Attempt to add a file, graft point set on the method level. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff") self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_007(self): """ Attempt to add a file, graft point set on the object and method levels. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff") self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_008(self): """ Attempt to add a file, graft point set on the object and method levels, where method value is None (which can't be distinguished from the method value being unset). 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint=None) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_009(self): """ Attempt to add a directory, no graft point set. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1) self.failUnlessEqual({ dir1:os.path.basename(dir1), }, isoImage.entries) def testAddEntry_010(self): """ Attempt to add a directory, graft point set on the object level. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1) self.failUnlessEqual({ dir1:os.path.join("p", "tree9") }, isoImage.entries) def testAddEntry_011(self): """ Attempt to add a directory, graft point set on the method level. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s") self.failUnlessEqual({ dir1:os.path.join("s", "tree9"), }, isoImage.entries) def testAddEntry_012(self): """ Attempt to add a file, no graft point set, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, contentsOnly=True) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_013(self): """ Attempt to add a file, graft point set on the object level, contentsOnly=True. 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, contentsOnly=True) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_014(self): """ Attempt to add a file, graft point set on the method level, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff", contentsOnly=True) self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_015(self): """ Attempt to add a file, graft point set on the object and method levels, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff", contentsOnly=True) self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_016(self): """ Attempt to add a file, graft point set on the object and method levels, where method value is None (which can't be distinguished from the method value being unset), contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint=None, contentsOnly=True) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_017(self): """ Attempt to add a directory, no graft point set, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, contentsOnly=True) self.failUnlessEqual({ dir1:None, }, isoImage.entries) def testAddEntry_018(self): """ Attempt to add a directory, graft point set on the object level, contentsOnly=True. 
""" self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, contentsOnly=True) self.failUnlessEqual({ dir1:"p" }, isoImage.entries) def testAddEntry_019(self): """ Attempt to add a directory, graft point set on the method level, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.failUnlessEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_020(self): """ Attempt to add a directory, graft point set on the object and methods levels, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.failUnlessEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_021(self): """ Attempt to add a directory, graft point set on the object and methods levels, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.failUnlessEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_022(self): """ Attempt to add a file that has already been added, override=False. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:None, }, isoImage.entries) self.failUnlessRaises(ValueError, isoImage.addEntry, file1, override=False) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_023(self): """ Attempt to add a file that has already been added, override=True. 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:None, }, isoImage.entries) isoImage.addEntry(file1, override=True) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_024(self): """ Attempt to add a directory that has already been added, override=False, changing the graft point. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="one") self.failUnlessEqual({ file1:"one", }, isoImage.entries) self.failUnlessRaises(ValueError, isoImage.addEntry, file1, graftPoint="two", override=False) self.failUnlessEqual({ file1:"one", }, isoImage.entries) def testAddEntry_025(self): """ Attempt to add a directory that has already been added, override=True, changing the graft point. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="one") self.failUnlessEqual({ file1:"one", }, isoImage.entries) isoImage.addEntry(file1, graftPoint="two", override=True) self.failUnlessEqual({ file1:"two", }, isoImage.entries) ########################## # Test getEstimatedSize() ########################## def testGetEstimatedSize_001(self): """ Test with an empty list. """ self.extractTar("tree9") isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.getEstimatedSize) def testGetEstimatedSize_002(self): """ Test with non-empty empty list. 
""" self.extractTar("tree9") dir1 = self.buildPath([ "tree9", ]) isoImage = IsoImage() isoImage.addEntry(dir1, graftPoint="base") result = isoImage.getEstimatedSize() self.failUnless(result > 0) #################### # Test writeImage() #################### def testWriteImage_001(self): """ Attempt to write an image containing no entries. """ isoImage = IsoImage() imagePath = self.buildPath([ "image.iso", ]) self.failUnlessRaises(ValueError, isoImage.writeImage, imagePath) def testWriteImage_002(self): """ Attempt to write an image containing only an empty directory, no graft point. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "dir002") in fsList) def testWriteImage_003(self): """ Attempt to write an image containing only an empty directory, with a graft point. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="base") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base") in fsList) self.failUnless(os.path.join(mountPath, "base", "dir002") in fsList) def testWriteImage_004(self): """ Attempt to write an image containing only a non-empty directory, no graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(10, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "dir002") in fsList) self.failUnless(os.path.join(mountPath, "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "dir002", ) in fsList) def testWriteImage_005(self): """ Attempt to write an image containing only a non-empty directory, with a graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint=os.path.join("something", "else")) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(12, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002") in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "dir002", ) in fsList) def testWriteImage_006(self): """ Attempt to write an image containing only a file, no graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_007(self): """ Attempt to write an image containing only a file, with a graft point. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="point") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "point", ) in fsList) self.failUnless(os.path.join(mountPath, "point", "file001", ) in fsList) def testWriteImage_008(self): """ Attempt to write an image containing a file and an empty directory, no graft points. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", ) in fsList) def testWriteImage_009(self): """ Attempt to write an image containing a file and an empty directory, with graft points. 
""" self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="other") isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(5, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir002", ) in fsList) def testWriteImage_010(self): """ Attempt to write an image containing a file and a non-empty directory, mixed graft points. """ self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint=None) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(11, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link003", ) in fsList) 
self.failUnless(os.path.join(mountPath, "base", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "dir002", ) in fsList) def testWriteImage_011(self): """ Attempt to write an image containing several files and a non-empty directory, mixed graft points. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) file2 = self.buildPath([ "tree9", "file002" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.addEntry(file2, graftPoint="other") isoImage.addEntry(dir1, graftPoint="base") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(13, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "dir002", ) in fsList) def testWriteImage_012(self): """ Attempt to write an image containing a deeply-nested directory. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="something") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(24, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link001", ) in fsList) 
self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "dir002", ) in fsList) def testWriteImage_013(self): """ Attempt to write an image containing only an empty directory, no graft point, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(1, len(fsList)) self.failUnless(mountPath in fsList) def testWriteImage_014(self): """ Attempt to write an image containing only an empty directory, with a graft point, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="base", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base") in fsList) def testWriteImage_015(self): """ Attempt to write an image containing only a non-empty directory, no graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(9, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", ) in fsList) def testWriteImage_016(self): """ Attempt to write an image containing only a non-empty directory, with a graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint=os.path.join("something", "else"), contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(11, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", ) in fsList) def testWriteImage_017(self): """ Attempt to write an image containing only a file, no graft point, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_018(self): """ Attempt to write an image containing only a file, with a graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="point", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "point", ) in fsList) self.failUnless(os.path.join(mountPath, "point", "file001", ) in fsList) def testWriteImage_019(self): """ Attempt to write an image containing a file and an empty directory, no graft points, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_020(self): """ Attempt to write an image containing a file and an empty directory, with graft points, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="other", contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(4, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file001", ) in fsList) def testWriteImage_021(self): """ Attempt to write an image containing a file and a non-empty directory, mixed graft points, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint=None, contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) self.failUnlessRaises(IOError, isoImage.writeImage, imagePath) # ends up with a duplicate name def testWriteImage_022(self): """ Attempt to write an image containing several files and a non-empty directory, mixed graft points, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) file2 = self.buildPath([ "tree9", "file002" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.addEntry(file2, graftPoint="other", contentsOnly=True) isoImage.addEntry(dir1, graftPoint="base", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(12, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir002", ) in fsList) def testWriteImage_023(self): """ Attempt to write an image containing a deeply-nested directory, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="something", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(23, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "link003", ) in fsList) 
self.failUnless(os.path.join(mountPath, "something", "dir002", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "dir002", ) in fsList) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" if runAllTests(): return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), unittest.makeSuite(TestIsoImage, 'test'), )) else: return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), unittest.makeSuite(TestIsoImage, 'testConstructor'), unittest.makeSuite(TestIsoImage, 'testUtilityMethods'), unittest.makeSuite(TestIsoImage, 'testAddEntry'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/customizetests.py0000664000175000017500000002015411415165227023045 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: customizetests.py 998 2010-07-07 19:56:08Z pronovic $
# Purpose  : Tests customization functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/customize.py.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest

from CedarBackup2.customize import PLATFORM, customizeOverrides
from CedarBackup2.config import Config, OptionsConfig, CommandOverride


#######################################################################
# Test Case Classes
#######################################################################

######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the various public functions."""

   ############################
   # Test customizeOverrides()
   ############################

   def testCustomizeOverrides_001(self):
      """
      Test platform=standard, no existing overrides.
""" config = Config() options = OptionsConfig() if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual(None, options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual(None, options.overrides) def testCustomizeOverrides_002(self): """ Test platform=standard, existing override for cdrecord. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), ], options.overrides) def testCustomizeOverrides_003(self): """ Test platform=standard, existing override for mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("mkisofs", "/blech"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("mkisofs", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual([ CommandOverride("mkisofs", "/blech"), ], options.overrides) def testCustomizeOverrides_004(self): """ Test platform=standard, existing override for cdrecord and mkisofs. 
""" config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) def testCustomizeOverrides_005(self): """ Test platform=debian, no existing overrides. """ config = Config() options = OptionsConfig() if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) def testCustomizeOverrides_006(self): """ Test platform=debian, existing override for cdrecord. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) def testCustomizeOverrides_007(self): """ Test platform=debian, existing override for mkisofs. 
""" config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("mkisofs", "/blech"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/blech"), ], options.overrides) def testCustomizeOverrides_008(self): """ Test platform=debian, existing override for cdrecord and mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/filesystemtests.py0000664000175000017500000641065311457444020023221 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: filesystemtests.py 1014 2010-10-20 01:38:23Z pronovic $ # Purpose : Tests filesystem-related classes. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/filesystem.py. Test Notes ========== This module contains individual tests for each of the classes implemented in filesystem.py: FilesystemList, BackupFileList and PurgeItemList. The BackupFileList and PurgeItemList classes inherit from FilesystemList, and the FilesystemList class itself inherits from the standard Python list class. For the most part, I won't spend time testing inherited functionality, especially if it's already been tested. However, I do test some of the base list functionality just to ensure that the inheritence has been constructed properly and everything seems to work as expected. 
You may look at this code and ask, "Why all of the checks that XXX is in
list YYY?  Why not just compare what we got to a known list?"  The answer is
that the order of the list is not significant, only its contents.  We can't
be positive about the order in which we recurse a directory, but we do need
to make sure that everything we expect is in the list and nothing more.  We
do this by checking the count of items and then making sure that exactly
that many known items exist in the list.

This file is ridiculously long, almost too long to be worked with easily.
I really should split it up into smaller files, but I like having a 1:1
relationship between a module and its test.

Windows Platform
================

Unfortunately, some of the expected results for these tests vary on the
Windows platform.  First, Windows does not support soft links, so most of
the tests around excluding and adding soft links don't really make any
sense.  Those checks are not executed on the Windows platform.  Second,
the tar files that are used to generate directory trees on disk are not
extracted exactly the same on Windows as on other platforms.  Again, the
differences are around soft links.  On Windows, the Python tar module
doesn't extract soft links to directories at all, and soft links to files
are extracted as real files containing the content of the link target.
This means that the expected directory listings differ, and so do the
total sizes of the extracted directories.

Naming Conventions
==================

I prefer to avoid large unit tests which validate more than one piece of
functionality.  Instead, I create lots of very small tests that each
validate one specific thing.  These small tests are then named with an
index number, yielding something like C{testAddDir_001} or
C{testValidate_023}.  Each method then has a docstring describing what
it's supposed to accomplish.  I feel that this makes it easier to judge
the extent of a problem when one exists.

Full vs. Reduced Tests
======================

All of the tests in this module are considered safe to be run in an
average build environment.  There is no need to use a FILESYSTEMTESTS_FULL
environment variable to provide a "reduced feature set" test suite as for
some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import sys
import os
import unittest
import tempfile
import tarfile

from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar, changeFileAge, randomFilename
from CedarBackup2.testutil import platformMacOsX, platformWindows, platformCygwin
from CedarBackup2.testutil import platformSupportsLinks, platformRequiresBinaryRead
from CedarBackup2.testutil import failUnlessAssignRaises
from CedarBackup2.filesystem import FilesystemList, BackupFileList, PurgeItemList, normalizeDir, compareContents


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data" ]
RESOURCES = [ "tree1.tar.gz", "tree2.tar.gz", "tree3.tar.gz", "tree4.tar.gz", "tree5.tar.gz",
              "tree6.tar.gz", "tree7.tar.gz", "tree8.tar.gz", "tree9.tar.gz", "tree10.tar.gz",
              "tree11.tar.gz", "tree12.tar.gz", "tree13.tar.gz", "tree22.tar.gz", ]

INVALID_FILE     = "bogus"       # This file name should never exist
NOMATCH_PATH     = "/something"  # This path should never match something we put in a file list
NOMATCH_BASENAME = "something"   # This basename should never match something we put in a file list
NOMATCH_PATTERN  = "pattern"     # This pattern should never match something we put in a file list

AGE_1_HOUR    = 1*60*60          # in seconds
AGE_2_HOURS   = 2*60*60          # in seconds
AGE_12_HOURS  = 12*60*60         # in seconds
AGE_23_HOURS  = 23*60*60         # in seconds
AGE_24_HOURS  = 24*60*60         # in seconds
AGE_25_HOURS  = 25*60*60         # in seconds
AGE_47_HOURS  = 47*60*60         # in seconds
AGE_48_HOURS  = 48*60*60         # in seconds
AGE_49_HOURS  = 49*60*60         # in seconds


#######################################################################
# Test Case Classes
#######################################################################

###########################
# TestFilesystemList class
###########################

class TestFilesystemList(unittest.TestCase):

   """Tests for the FilesystemList class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except:
         pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   def pathPattern(self, path):
      """Returns properly-escaped regular expression pattern matching the indicated path."""
      return ".*%s.*" % path.replace("\\", "\\\\")

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test attribute assignment
   ############################

   def testAssignment_001(self):
      """
      Test assignment of excludeFiles attribute, true values.
      """
      fsList = FilesystemList()
      self.failUnlessEqual(False, fsList.excludeFiles)
      fsList.excludeFiles = True
      self.failUnlessEqual(True, fsList.excludeFiles)
      fsList.excludeFiles = [ 1, ]
      self.failUnlessEqual(True, fsList.excludeFiles)

   def testAssignment_002(self):
      """
      Test assignment of excludeFiles attribute, false values.
""" fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeFiles) fsList.excludeFiles = False self.failUnlessEqual(False, fsList.excludeFiles) fsList.excludeFiles = [ ] self.failUnlessEqual(False, fsList.excludeFiles) def testAssignment_003(self): """ Test assignment of excludeLinks attribute, true values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeLinks) fsList.excludeLinks = True self.failUnlessEqual(True, fsList.excludeLinks) fsList.excludeLinks = [ 1, ] self.failUnlessEqual(True, fsList.excludeLinks) def testAssignment_004(self): """ Test assignment of excludeLinks attribute, false values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeLinks) fsList.excludeLinks = False self.failUnlessEqual(False, fsList.excludeLinks) fsList.excludeLinks = [ ] self.failUnlessEqual(False, fsList.excludeLinks) def testAssignment_005(self): """ Test assignment of excludeDirs attribute, true values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeDirs) fsList.excludeDirs = True self.failUnlessEqual(True, fsList.excludeDirs) fsList.excludeDirs = [ 1, ] self.failUnlessEqual(True, fsList.excludeDirs) def testAssignment_006(self): """ Test assignment of excludeDirs attribute, false values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeDirs) fsList.excludeDirs = False self.failUnlessEqual(False, fsList.excludeDirs) fsList.excludeDirs = [ ] self.failUnlessEqual(False, fsList.excludeDirs) def testAssignment_007(self): """ Test assignment of ignoreFile attribute. """ fsList = FilesystemList() self.failUnlessEqual(None, fsList.ignoreFile) fsList.ignoreFile = "ken" self.failUnlessEqual("ken", fsList.ignoreFile) fsList.ignoreFile = None self.failUnlessEqual(None, fsList.ignoreFile) def testAssignment_008(self): """ Test assignment of excludePaths attribute. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList.excludePaths) fsList.excludePaths = None self.failUnlessEqual([], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", ] self.failUnlessEqual([ "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", "/path/to/something/else", ] self.failUnlessEqual([ "/path/to/something/absolute", "/path/to/something/else", ], fsList.excludePaths) self.failUnlessAssignRaises(ValueError, fsList, "excludePaths", ["path/to/something/relative", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludePaths", [ "/path/to/something/absolute", "path/to/something/relative", ]) fsList.excludePaths = [ "/path/to/something/absolute", ] self.failUnlessEqual([ "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths.insert(0, "/ken") self.failUnlessEqual([ "/ken", "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths.append("/file") self.failUnlessEqual([ "/ken", "/path/to/something/absolute", "/file", ], fsList.excludePaths) fsList.excludePaths.extend(["/one", "/two", ]) self.failUnlessEqual([ "/ken", "/path/to/something/absolute", "/file", "/one", "/two", ], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", ] self.failUnlessRaises(ValueError, fsList.excludePaths.insert, 0, "path/to/something/relative") self.failUnlessRaises(ValueError, fsList.excludePaths.append, "path/to/something/relative") self.failUnlessRaises(ValueError, fsList.excludePaths.extend, ["path/to/something/relative", ]) def testAssignment_009(self): """ Test assignment of excludePatterns attribute. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList.excludePatterns) fsList.excludePatterns = None self.failUnlessEqual([], fsList.excludePatterns) fsList.excludePatterns = [ ".*\.jpg", ] self.failUnlessEqual([ ".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns = [ ".*\.jpg", "[a-zA-Z0-9]*", ] self.failUnlessEqual([ ".*\.jpg", "[a-zA-Z0-9]*", ], fsList.excludePatterns) self.failUnlessAssignRaises(ValueError, fsList, "excludePatterns", [ "*.jpg", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludePatterns", [ "*.jpg", "[a-zA-Z0-9]*", ]) fsList.excludePatterns = [ ".*\.jpg", ] self.failUnlessEqual([ ".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns.insert(0, "ken") self.failUnlessEqual([ "ken", ".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns.append("pattern") self.failUnlessEqual([ "ken", ".*\.jpg", "pattern", ], fsList.excludePatterns) fsList.excludePatterns.extend(["one", "two", ]) self.failUnlessEqual([ "ken", ".*\.jpg", "pattern", "one", "two", ], fsList.excludePatterns) fsList.excludePatterns = [ ".*\.jpg", ] self.failUnlessRaises(ValueError, fsList.excludePatterns.insert, 0, "*.jpg") self.failUnlessEqual([ ".*\.jpg", ], fsList.excludePatterns) self.failUnlessRaises(ValueError, fsList.excludePatterns.append, "*.jpg") self.failUnlessEqual([ ".*\.jpg", ], fsList.excludePatterns) self.failUnlessRaises(ValueError, fsList.excludePatterns.extend, ["*.jpg", ]) self.failUnlessEqual([ ".*\.jpg", ], fsList.excludePatterns) def testAssignment_010(self): """ Test assignment of excludeBasenamePatterns attribute. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = None self.failUnlessEqual([], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ ".*\.jpg", ] self.failUnlessEqual([ ".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ ".*\.jpg", "[a-zA-Z0-9]*", ] self.failUnlessEqual([ ".*\.jpg", "[a-zA-Z0-9]*", ], fsList.excludeBasenamePatterns) self.failUnlessAssignRaises(ValueError, fsList, "excludeBasenamePatterns", [ "*.jpg", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludeBasenamePatterns", [ "*.jpg", "[a-zA-Z0-9]*", ]) fsList.excludeBasenamePatterns = [ ".*\.jpg", ] self.failUnlessEqual([ ".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.insert(0, "ken") self.failUnlessEqual([ "ken", ".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.append("pattern") self.failUnlessEqual([ "ken", ".*\.jpg", "pattern", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.extend(["one", "two", ]) self.failUnlessEqual([ "ken", ".*\.jpg", "pattern", "one", "two", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ ".*\.jpg", ] self.failUnlessRaises(ValueError, fsList.excludeBasenamePatterns.insert, 0, "*.jpg") self.failUnlessEqual([ ".*\.jpg", ], fsList.excludeBasenamePatterns) self.failUnlessRaises(ValueError, fsList.excludeBasenamePatterns.append, "*.jpg") self.failUnlessEqual([ ".*\.jpg", ], fsList.excludeBasenamePatterns) self.failUnlessRaises(ValueError, fsList.excludeBasenamePatterns.extend, ["*.jpg", ]) self.failUnlessEqual([ ".*\.jpg", ], fsList.excludeBasenamePatterns) ################################ # Test basic list functionality ################################ def testBasic_001(self): """ Test the append() method. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') self.failUnlessEqual(['a'], fsList) fsList.append('b') self.failUnlessEqual(['a', 'b'], fsList) def testBasic_002(self): """ Test the insert() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.insert(0, 'a') self.failUnlessEqual(['a'], fsList) fsList.insert(0, 'b') self.failUnlessEqual(['b', 'a'], fsList) def testBasic_003(self): """ Test the remove() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.insert(0, 'a') fsList.insert(0, 'b') self.failUnlessEqual(['b', 'a'], fsList) fsList.remove('a') self.failUnlessEqual(['b'], fsList) fsList.remove('b') self.failUnlessEqual([], fsList) def testBasic_004(self): """ Test the pop() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.failUnlessEqual('e', fsList.pop()) self.failUnlessEqual(['a', 'b', 'c', 'd'], fsList) self.failUnlessEqual('d', fsList.pop()) self.failUnlessEqual(['a', 'b', 'c'], fsList) self.failUnlessEqual('c', fsList.pop()) self.failUnlessEqual(['a', 'b'], fsList) self.failUnlessEqual('b', fsList.pop()) self.failUnlessEqual(['a'], fsList) self.failUnlessEqual('a', fsList.pop()) self.failUnlessEqual([], fsList) def testBasic_005(self): """ Test the count() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.failUnlessEqual(1, fsList.count('a')) def testBasic_006(self): """ Test the index() method. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.failUnlessEqual(2, fsList.index('c')) def testBasic_007(self): """ Test the reverse() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) fsList.reverse() self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList) fsList.reverse() self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) def testBasic_008(self): """ Test the sort() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('e') fsList.append('d') fsList.append('c') fsList.append('b') fsList.append('a') self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList) fsList.sort() self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) fsList.sort() self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) def testBasic_009(self): """ Test slicing. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('e') fsList.append('d') fsList.append('c') fsList.append('b') fsList.append('a') self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList) self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList[:]) self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList[0:]) self.failUnlessEqual('e', fsList[0]) self.failUnlessEqual('a', fsList[4]) self.failUnlessEqual(['d', 'c', 'b'], fsList[1:4]) ################# # Test addFile() ################# def testAddFile_001(self): """ Attempt to add a file that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_002(self): """ Attempt to add a directory; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_003(self): """ Attempt to add a soft link; no exclusions. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_004(self): """ Attempt to add an existing file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_005(self): """ Attempt to add a file that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_006(self): """ Attempt to add a directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_007(self): """ Attempt to add a soft link; excludeFiles set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_008(self): """ Attempt to add an existing file; excludeFiles set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_009(self): """ Attempt to add a file that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_010(self): """ Attempt to add a directory; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_011(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_012(self): """ Attempt to add an existing file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_013(self): """ Attempt to add a file that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_014(self): """ Attempt to add a directory; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_015(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_016(self): """ Attempt to add an existing file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_017(self): """ Attempt to add a file that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_018(self): """ Attempt to add a directory; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_019(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_020(self): """ Attempt to add an existing file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_021(self): """ Attempt to add a file that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_022(self): """ Attempt to add a directory; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_023(self): """ Attempt to add a soft link; with excludePaths not including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_024(self): """ Attempt to add an existing file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_025(self): """ Attempt to add a file that doesn't exist; with excludePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_026(self): """ Attempt to add a directory; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_027(self): """ Attempt to add a soft link; with excludePatterns matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_028(self): """ Attempt to add an existing file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_029(self): """ Attempt to add a file that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_030(self): """ Attempt to add a directory; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_031(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_032(self): """ Attempt to add an existing file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_033(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist). """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_034(self): """ Attempt to add a file that has spaces in its name. """ self.extractTar("tree11") path = self.buildPath(["tree11", "file with spaces"]) fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_035(self): """ Attempt to add a UTF-8 file. """ self.extractTar("tree12") path = self.buildPath(["tree12", "unicode", "\xe2\x99\xaa\xe2\x99\xac"]) fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_036(self): """ Attempt to add a file that doesn't exist; with excludeBasenamePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_037(self): """ Attempt to add a directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_038(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_039(self): """ Attempt to add an existing file; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_040(self): """ Attempt to add a file that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_041(self): """ Attempt to add a directory; with excludeBasenamePatterns not matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_042(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePaths = [ NOMATCH_BASENAME ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePaths = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_043(self): """ Attempt to add an existing file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) ################ # Test addDir() ################ def testAddDir_001(self): """ Attempt to add a directory that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_002(self): """ Attempt to add a file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_003(self): """ Attempt to add a soft link; no exclusions. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_004(self): """ Attempt to add an existing directory; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_005(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_006(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_007(self): """ Attempt to add a soft link; excludeFiles set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_008(self): """ Attempt to add an existing directory; excludeFiles set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_009(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_010(self): """ Attempt to add a file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_011(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_012(self): """ Attempt to add an existing directory; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_013(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_014(self): """ Attempt to add a file; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_015(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_016(self): """ Attempt to add an existing directory; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_017(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_018(self): """ Attempt to add a file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_019(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_020(self): """ Attempt to add an existing directory; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_021(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_022(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_023(self): """ Attempt to add a soft link; with excludePaths not including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_024(self): """ Attempt to add an existing directory; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_025(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_026(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_027(self): """ Attempt to add a soft link; with excludePatterns matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_028(self): """ Attempt to add an existing directory; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_029(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_030(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_031(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_032(self): """ Attempt to add an existing directory; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_033(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist). """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_034(self): """ Attempt to add a directory that has spaces in its name. """ self.extractTar("tree11") path = self.buildPath(["tree11", "dir with spaces"]) fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_035(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_036(self): """ Attempt to add a file; with excludeBasenamePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_037(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_038(self): """ Attempt to add an existing directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_039(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_040(self): """ Attempt to add a file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_041(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_042(self): """ Attempt to add an existing directory; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) ######################## # Test addDirContents() ######################## def testAddDirContents_001(self): """ Attempt to add a directory that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_002(self): """ Attempt to add a file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_003(self): """ Attempt to add a soft link; no exclusions. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_004(self): """ Attempt to add an empty directory containing ignore file; no exclusions. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_005(self): """ Attempt to add an empty directory; no exclusions. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_006(self): """ Attempt to add an non-empty directory containing ignore file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_007(self): """ Attempt to add an non-empty directory; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_008(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_009(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_010(self): """ Attempt to add a soft link; excludeFiles set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_011(self): """ Attempt to add an empty directory containing ignore file; excludeFiles set. 
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_012(self): """ Attempt to add an empty directory; excludeFiles set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_013(self): """ Attempt to add an non-empty directory containing ignore file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_014(self): """ Attempt to add an non-empty directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(5, count) self.failUnlessEqual(5, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) def testAddDirContents_015(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_016(self): """ Attempt to add a file; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_017(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_018(self): """ Attempt to add an empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_019(self): """ Attempt to add an empty directory; excludeDirs set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_020(self): """ Attempt to add an non-empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_021(self): """ Attempt to add an non-empty directory; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(3, count) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_023(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_024(self): """ Attempt to add a file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_025(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_026(self): """ Attempt to add an empty directory containing ignore file; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_027(self): """ Attempt to add an empty directory; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnless(self.buildPath(["tree8", "dir001", ]) in fsList) def testAddDirContents_028(self): """ Attempt to add an non-empty directory containing ignore file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_029(self): """ Attempt to add an non-empty directory; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) def testAddDirContents_030(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_031(self): """ Attempt to add a file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_032(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_033(self): """ Attempt to add an empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_034(self): """ Attempt to add an empty directory; with excludePaths including the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_035(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_036(self): """ Attempt to add an non-empty directory; with excludePaths including the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_037(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_038(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_039(self): """ Attempt to add a soft link; with excludePaths not including the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_040(self): """ Attempt to add an empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_041(self): """ Attempt to add an empty directory; with excludePaths not including the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_042(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_043(self): """ Attempt to add an non-empty directory; with excludePaths not including the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_044(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_045(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_046(self): """ Attempt to add a soft link; with excludePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_047(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_048(self): """ Attempt to add an empty directory; with excludePatterns matching the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_049(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_050(self): """ Attempt to add an non-empty directory; with excludePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_051(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_052(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_053(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_054(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_055(self): """ Attempt to add an empty directory; with excludePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_056(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_057(self): """ Attempt to add an non-empty directory; with excludePatterns not matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_058(self): """ Attempt to add a large tree with no exclusions. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(122, count) self.failUnlessEqual(122, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", 
"dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", 
                                          ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in
                         fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(136, count)
         self.failUnlessEqual(136, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001",
"dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in 
                         fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_059(self):
      """
      Attempt to add a large tree, with excludeFiles set.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.excludeFiles = True
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(28, count)
         self.failUnlessEqual(28, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
      else:
         self.failUnlessEqual(42, count)
         self.failUnlessEqual(42, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6",
"dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in 
                         fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_060(self):
      """
      Attempt to add a large tree, with excludeDirs set.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.excludeDirs = True
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(94, count)
         self.failUnlessEqual(94, len(fsList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in
                         fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6",
"dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(94, count) self.failUnlessEqual(94, len(fsList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) def testAddDirContents_061(self): """ Attempt to add a large tree, with excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(96, count) self.failUnlessEqual(96, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) 
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)

   def testAddDirContents_062(self):
      """
      Attempt to add a large tree, with excludePaths set to exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.excludePaths = [ self.buildPath([ "tree6", "dir001", "dir002", ]),
                              self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]),
                              self.buildPath([ "tree6", "dir003", "dir002", "file001", ]),
                              self.buildPath([ "tree6", "dir003", "dir002", "file002", ]), ]
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(112, count)
         self.failUnlessEqual(112, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(125, count)
         self.failUnlessEqual(125, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_063(self):
      """
      Attempt to add a large tree, with excludePatterns set to exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      if platformWindows():
         fsList.excludePatterns = [ ".*file001.*", r".*tree6\\dir002\\dir001.*" ]
      else:
         fsList.excludePatterns = [ ".*file001.*", r".*tree6/dir002/dir001.*" ]
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(95, count)
         self.failUnlessEqual(95, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in
fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) 
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(108, count)
         self.failUnlessEqual(108, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "link002", ]) in fsList)

   def testAddDirContents_064(self):
      """
      Attempt to add a large tree, with ignoreFile set to exclude some directories.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.ignoreFile = "ignore"
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(70, count)
         self.failUnlessEqual(70, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(79, count)
         self.failUnlessEqual(79, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_065(self):
      """
      Attempt to add a link to a file.
      """
      if platformSupportsLinks():
         self.extractTar("tree9")
         path = self.buildPath(["tree9", "dir002", "link003", ])
         fsList = FilesystemList()
         self.failUnlessRaises(ValueError, fsList.addDirContents, path)

   def testAddDirContents_066(self):
      """
      Attempt to add a link to a directory (which should add its contents).
""" if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9", "link002" ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(9, count) self.failUnlessEqual(9, len(fsList)) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link004", ]) in fsList) def testAddDirContents_067(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist). """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_068(self): """ Attempt to add directory containing an invalid link (i.e. a link that points to something that doesn't exist). """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(3, count) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath([ "tree10", ]) in fsList) self.failUnless(self.buildPath([ "tree10", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree10", "dir002", ]) in fsList) def testAddDirContents_069(self): """ Attempt to add a directory containing items with spaces. 
""" self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testAddDirContents_070(self): """ Attempt to add a directory which has a name containing spaces. """ self.extractTar("tree11") path = self.buildPath(["tree11", "dir with spaces", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(5, count) self.failUnlessEqual(5, len(fsList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testAddDirContents_071(self): """ Attempt to add a directory which has a UTF-8 filename in it. 
""" self.extractTar("tree12") path = self.buildPath(["tree12", "unicode", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(6, count) self.failUnlessEqual(6, len(fsList)) self.failUnless(self.buildPath([ "tree12", "unicode", ]) in fsList) self.failUnless(self.buildPath([ "tree12", "unicode", "README.strange-name", ]) in fsList) self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.long.gz", ]) in fsList) self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.cp437.gz", ]) in fsList) self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.short.gz", ]) in fsList) self.failUnless(self.buildPath([ "tree12", "unicode", "\xe2\x99\xaa\xe2\x99\xac", ]) in fsList) def testAddDirContents_072(self): """ Attempt to add a directory which has several UTF-8 filenames in it. This test data was taken from Rick Lowe's problems around the release of v1.10. I don't run the test for Darwin (Mac OS X) and Windows because the tarball isn't valid on those platforms. 
""" if not platformMacOsX() and not platformWindows(): self.extractTar("tree13") path = self.buildPath(["tree13", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree13", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "Les mouvements de r\x82forme.doc", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "l'\x82nonc\x82.sxw", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "l\x82onard - renvois et bibliographie.sxw", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "l\x82onard copie finale.sxw", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci - page titre.sxw", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci.sxw", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "Rammstein - B\x81ck Dich.mp3", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "megaherz - Glas Und Tr\x84nen.mp3", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "Megaherz - Mistst\x81ck.MP3", ]) in fsList) self.failUnless(self.buildPath([ "tree13", "Rammstein - Mutter - B\x94se.mp3", ]) in fsList) def testAddDirContents_073(self): """ Attempt to add a large tree with recursive=False. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, recursive=False) if not platformSupportsLinks(): self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_074(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_075(self): """ Attempt to add a file; with excludeBasenamePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_076(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_077(self): """ Attempt to add an empty directory containing ignore file; with excludeBasenamePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_078(self): """ Attempt to add an empty directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_079(self): """ Attempt to add an non-empty directory containing ignore file; with excludeBasenamePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ "dir008", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_080(self): """ Attempt to add an non-empty directory; with excludeBasenamePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_081(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_082(self): """ Attempt to add a file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_083(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_084(self): """ Attempt to add an empty directory containing ignore file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_085(self): """ Attempt to add an empty directory; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_086(self): """ Attempt to add an non-empty directory containing ignore file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_087(self): """ Attempt to add an non-empty directory; with excludeBasenamePatterns not matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_088(self): """ Attempt to add a large tree, with excludeBasenamePatterns set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", "dir001" ] count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(55, count) self.failUnlessEqual(55, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) 
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(64, count)
         self.failUnlessEqual(64, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_089(self):
      """
      Attempt to add a large tree with no exclusions, addSelf=True.
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, addSelf=True) if not platformSupportsLinks(): self.failUnlessEqual(122, count) self.failUnlessEqual(122, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(136, count) self.failUnlessEqual(136, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", 
"dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", 
"file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_090(self): """ Attempt to add a large tree with no exclusions, addSelf=False. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, addSelf=False) if not platformSupportsLinks(): self.failUnlessEqual(121, count) self.failUnlessEqual(121, len(fsList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", 
"file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", 
"link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(135, count) self.failUnlessEqual(135, len(fsList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", 
"dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", 
            "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

    def testAddDirContents_091(self):
        """
        Attempt to add a directory with linkDepth=1.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=1)
        if not platformSupportsLinks():
            self.failUnlessEqual(122, count)
            self.failUnlessEqual(122, len(fsList))
            self.failUnless(self.buildPath([ "tree6", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
        else:
            self.failUnlessEqual(165, count)
            self.failUnlessEqual(165, len(fsList))
            self.failUnless(self.buildPath([ "tree6", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)

    def testAddDirContents_092(self):
        """
        Attempt to add a directory with linkDepth=2.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path, linkDepth=2)
        if not platformSupportsLinks():
            self.failUnlessEqual(122, count)
            self.failUnlessEqual(122, len(fsList))
            self.failUnless(self.buildPath([ "tree6" ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(241, count) self.failUnlessEqual(241, len(fsList)) self.failUnless(self.buildPath([ "tree6" ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", 
]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file004", 
]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", 
"link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"link001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "link002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) def testAddDirContents_093(self): """ Attempt to add a directory with linkDepth=0, dereference=False. 
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, linkDepth=0, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(12, count)
         self.failUnlessEqual(12, len(fsList))
         self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList)

   def testAddDirContents_094(self):
      """
      Attempt to add a directory with linkDepth=1, dereference=False.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, linkDepth=1, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", ]) in fsList)

   def testAddDirContents_095(self):
      """
      Attempt to add a directory with linkDepth=2, dereference=False.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, linkDepth=2, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(20, count)
         self.failUnlessEqual(20, len(fsList))
         self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in fsList)

   def testAddDirContents_096(self):
      """
      Attempt to add a directory with linkDepth=3, dereference=False.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, linkDepth=3, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(20, count)
         self.failUnlessEqual(20, len(fsList))
         self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in fsList)

   def testAddDirContents_097(self):
      """
      Attempt to add a directory with linkDepth=0, dereference=True.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, linkDepth=0, dereference=True)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(12, count)
         self.failUnlessEqual(12, len(fsList))
         self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList)

   def testAddDirContents_098(self):
      """
      Attempt to add a directory with linkDepth=1, dereference=True.
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=1, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(20, count) self.failUnlessEqual(20, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005" ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) def testAddDirContents_099(self): """ Attempt to add a directory with linkDepth=2, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=2, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(32, count) self.failUnlessEqual(32, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", 
"file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in fsList) def testAddDirContents_100(self): """ Attempt to add a directory with linkDepth=3, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=3, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(35, count) self.failUnlessEqual(35, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) 
self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir007", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir007", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir008", "file001", ]) in fsList) def testAddDirContents_101(self): """ Attempt to add a soft link; excludeFiles and dereference set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_102(self): """ Attempt to add a soft link; excludeDirs and dereference set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_103(self): """ Attempt to add a soft link; excludeLinks and dereference set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_104(self): """ Attempt to add a soft link; with excludePaths including the path, with dereference=True. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_105(self): """ Attempt to add a soft link; with excludePatterns matching the path, with dereference=True. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_106(self): """ Attempt to add a link to a file, with dereference=True. """ if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9", "dir002", "link003", ]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) def testAddDirContents_107(self): """ Attempt to add a link to a directory (which should add its contents), with dereference=True. 
""" if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9", "link002" ]) fsList = FilesystemList() count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "dir001", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "dir002", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "file001", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "file002", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link004", ]) in fsList) def testAddDirContents_108(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist), and dereference=True. """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) def testAddDirContents_109(self): """ Attempt to add directory containing an invalid link (i.e. a link that points to something that doesn't exist), and dereference=True. 
""" if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10"]) fsList = FilesystemList() count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(3, count) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath([ "tree10", ]) in fsList) self.failUnless(self.buildPath([ "tree10", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree10", "dir002", ]) in fsList) def testAddDirContents_110(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path, and dereference=True. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) ##################### # Test removeFiles() ##################### def testRemoveFiles_001(self): """ Test with an empty list and a pattern of None. """ fsList = FilesystemList() count = fsList.removeFiles(pattern=None) self.failUnlessEqual(0, count) def testRemoveFiles_002(self): """ Test with an empty list and a non-empty pattern. """ fsList = FilesystemList() count = fsList.removeFiles(pattern="pattern") self.failUnlessEqual(0, count) self.failUnlessRaises(ValueError, fsList.removeFiles, pattern="*.jpg") def testRemoveFiles_003(self): """ Test with a non-empty list (files only) and a pattern of None. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(7, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) def testRemoveFiles_004(self): """ Test with a non-empty list (directories only) and a pattern of None. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(0, count) self.failUnlessEqual(11, 
len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_005(self): """ Test with a non-empty list (files and directories) and a pattern of None. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(44, count) self.failUnlessEqual(37, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def 
testRemoveFiles_006(self): """ Test with a non-empty list (files, directories and links) and a pattern of None. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in 
fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(10, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveFiles_007(self): """ Test with a non-empty list (files and directories, some nonexistent) and a pattern of None. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", 
]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(44, count) self.failUnlessEqual(38, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def testRemoveFiles_008(self): """ Test with a non-empty list (spaces in path names) and a pattern of None. 
""" self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", 
]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testRemoveFiles_009(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveFiles_010(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_011(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveFiles_012(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches none of the files.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9",
                                          "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([
                                          "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveFiles_013(self):
      """
      Test with a non-empty list (files and directories, some nonexistent) and
      a non-empty pattern that matches none of the files.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([
                                       "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4",
                                       "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4",
                                       "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveFiles_014(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches none of the files.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11",
                                          "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in
                         fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces",
                                          "link with spaces", ]) in fsList)

   def testRemoveFiles_015(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches some of the files.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeFiles(pattern=".*tree1.*file00[67]")
      self.failUnlessEqual(2, count)
      self.failUnlessEqual(6, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)

   def testRemoveFiles_016(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches some of the files.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeFiles(pattern=".*tree2.*")
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveFiles_017(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches some of the files.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*tree4.*dir006.*") self.failUnlessEqual(10, count) self.failUnlessEqual(71, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveFiles_018(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches some of the files. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeFiles(pattern=".*tree9.*dir002.*") self.failUnlessEqual(4, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", 
"file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", 
]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeFiles(pattern=".*tree9.*dir002.*") self.failUnlessEqual(4, count) self.failUnlessEqual(18, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveFiles_019(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of the files. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*dir001.*file002.*") self.failUnlessEqual(1, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveFiles_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of the files. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with 
spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeFiles(pattern=".*with spaces.*") self.failUnlessEqual(6, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeFiles(pattern=".*with spaces.*") self.failUnlessEqual(6, count) self.failUnlessEqual(10, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) def testRemoveFiles_021(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches anything. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(7, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) def testRemoveFiles_022(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches anything. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_023(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches anything. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(44, count) self.failUnlessEqual(37, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def testRemoveFiles_024(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches all of the files. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(10, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(10, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveFiles_025(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches all of the files. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", 
"file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(44, count) self.failUnlessEqual(38, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", 
]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def testRemoveFiles_026(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches all of the files. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(11, count) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", 
]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(11, count) self.failUnlessEqual(5, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) #################### # Test removeDirs() #################### def testRemoveDirs_001(self): """ Test with an empty list and a pattern of None. 
""" fsList = FilesystemList() count = fsList.removeDirs(pattern=None) self.failUnlessEqual(0, count) def testRemoveDirs_002(self): """ Test with an empty list and a non-empty pattern. """ fsList = FilesystemList() count = fsList.removeDirs(pattern="pattern") self.failUnlessEqual(0, count) self.failUnlessRaises(ValueError, fsList.removeDirs, pattern="*.jpg") def testRemoveDirs_003(self): """ Test with a non-empty list (files only) and a pattern of None. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeDirs(pattern=None) self.failUnlessEqual(1, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveDirs_004(self): """ Test with a non-empty list (directories only) and a pattern of None. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeDirs(pattern=None) self.failUnlessEqual(11, count) self.failUnlessEqual(0, len(fsList)) def testRemoveDirs_005(self): """ Test with a non-empty list (files and directories) and a pattern of None. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=None) self.failUnlessEqual(37, count) self.failUnlessEqual(44, len(fsList)) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_006(self): """ Test with a non-empty list (files, directories and links) and a pattern of None. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeDirs(pattern=None) self.failUnlessEqual(7, count) self.failUnlessEqual(10, len(fsList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"link002", ]) in fsList) count = fsList.removeDirs(pattern=None) self.failUnlessEqual(12, count) self.failUnlessEqual(10, len(fsList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) def testRemoveDirs_007(self): """ Test with a non-empty list (files and directories, some nonexistent) and a pattern of None. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=None) self.failUnlessEqual(37, count) self.failUnlessEqual(45, len(fsList)) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_008(self): """ Test with a non-empty list 
      (spaces in path names) and a pattern of None.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeDirs(pattern=None)
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeDirs(pattern=None)
         self.failUnlessEqual(5, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveDirs_009(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches none of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveDirs_010(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches none of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveDirs_011(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_012(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches none of them. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveDirs_013(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_014(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches none of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) 
in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) 
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveDirs_015(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches some of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*tree1.file00[67]")
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveDirs_016(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches some of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeDirs(pattern=".*dir0[012]0")
      self.failUnlessEqual(1, count)
      self.failUnlessEqual(10, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)

   def testRemoveDirs_017(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches some of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*dir001")
      self.failUnlessEqual(9, count)
      self.failUnlessEqual(72, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_018(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches some of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeDirs(pattern=".*tree9.*dir002.*")
         self.failUnlessEqual(4, count)
         self.failUnlessEqual(13, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeDirs(pattern=".*tree9.*dir002.*")
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveDirs_019(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches some of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*dir001")
      self.failUnlessEqual(9, count)
      self.failUnlessEqual(73, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def 
testRemoveDirs_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeDirs(pattern=".*with spaces.*") self.failUnlessEqual(1, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeDirs(pattern=".*with spaces.*") self.failUnlessEqual(1, count) 
         self.failUnlessEqual(15, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveDirs_021(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches all of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(1, count)
      self.failUnlessEqual(7, len(fsList))
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveDirs_022(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches all of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(0, len(fsList))

   def testRemoveDirs_023(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(37, count)
      self.failUnlessEqual(44, len(fsList))
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_024(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches all of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeDirs(pattern=".*")
         self.failUnlessEqual(7, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeDirs(pattern=".*")
         self.failUnlessEqual(12, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)

   def testRemoveDirs_025(self):
      """
      Test with a non-empty list (files and directories, some nonexistent) and
      a non-empty pattern that matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(37, count)
      self.failUnlessEqual(45, len(fsList))
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_026(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty pattern
      that matches all of them.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeDirs(pattern=".*")
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002",
"file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeDirs(pattern=".*") self.failUnlessEqual(5, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) ##################### # Test removeLinks() ##################### def testRemoveLinks_001(self): """ Test with an empty list and a pattern of None. """ if platformSupportsLinks(): fsList = FilesystemList() count = fsList.removeLinks(pattern=None) self.failUnlessEqual(0, count) def testRemoveLinks_002(self): """ Test with an empty list and a non-empty pattern. """ if platformSupportsLinks(): fsList = FilesystemList() count = fsList.removeLinks(pattern="pattern") self.failUnlessEqual(0, count) self.failUnlessRaises(ValueError, fsList.removeLinks, pattern="*.jpg") def testRemoveLinks_003(self): """ Test with a non-empty list (files only) and a pattern of None. 
""" if platformSupportsLinks(): self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_004(self): """ Test with a non-empty list (directories only) and a pattern of None. 
""" if platformSupportsLinks(): self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_005(self): """ Test with a non-empty list (files and directories) and a pattern of None. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_006(self): """ Test with a non-empty list (files, directories and links) and a pattern of None. """ if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", 
"link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(9, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) def testRemoveLinks_007(self): """ Test with a non-empty list (files and directories, some nonexistent) and a pattern of None. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) 
in fsList)
         # tree4 layout: (subdirectory, count of dirNNN children, count of fileNNN children)
         for (subdir, dirs, files) in [ ("dir003", 8, 7), ("dir004", 3, 1),
                                        ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.failUnless(self.buildPath([ "tree4", subdir, ]) in fsList)
            for i in range(1, dirs + 1):
               self.failUnless(self.buildPath([ "tree4", subdir, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
               self.failUnless(self.buildPath([ "tree4", subdir, "file%03d" % i, ]) in fsList)
         for i in range(1, 8):
            self.failUnless(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)
         count = fsList.removeLinks(pattern=None)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(82, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
         for (subdir, dirs, files) in [ ("dir001", 3, 4), ("dir002", 3, 8),
                                        ("dir003", 8, 7), ("dir004", 3, 1),
                                        ("dir005", 8, 7), ("dir006", 5, 10), ]:
            self.failUnless(self.buildPath([ "tree4", subdir, ]) in fsList)
            for i in range(1, dirs + 1):
               self.failUnless(self.buildPath([ "tree4", subdir, "dir%03d" % i, ]) in fsList)
            for i in range(1, files + 1):
               self.failUnless(self.buildPath([ "tree4", subdir, "file%03d" % i, ]) in fsList)
         for i in range(1, 8):
            self.failUnless(self.buildPath([ "tree4", "file%03d" % i, ]) in fsList)

   def testRemoveLinks_008(self):
      """
      Test with a non-empty list (spaces in path names) and a pattern of None.
      """
      if platformSupportsLinks():
         self.extractTar("tree11")
         path = self.buildPath(["tree11", ])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeLinks(pattern=None)
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)

   def testRemoveLinks_009(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches none of them.
""" if platformSupportsLinks(): self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_010(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches none of them. 
""" if platformSupportsLinks(): self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_011(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches none of them. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_012(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches none of them. """ if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveLinks_013(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches none of them. """ if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_014(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches none of them. 
""" if platformSupportsLinks(): self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testRemoveLinks_015(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree1.*file007") self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_016(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=".*tree2.*") self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) 
self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_017(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree4.*dir006.*") self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_018(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches some of them. 
""" if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=".*tree9.*dir002.*") self.failUnlessEqual(4, count) self.failUnlessEqual(18, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) 
in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveLinks_019(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of them. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) 
in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree4.*dir006.*") self.failUnlessEqual(0, count) self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with 
spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=".*with spaces.*") self.failUnlessEqual(3, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) def testRemoveLinks_021(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches all of them. 
""" if platformSupportsLinks(): self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_022(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches all of them. 
""" if platformSupportsLinks(): self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_023(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches all of them. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveLinks_024(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches all of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree9")
         path = self.buildPath(["tree9"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeLinks(pattern=".*")
         self.failUnlessEqual(9, count)
         self.failUnlessEqual(13, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)

   def testRemoveLinks_025(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches all of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree4")
         path = self.buildPath(["tree4"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(81, count)
         fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
         self.failUnlessEqual(82, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
         count = fsList.removeLinks(pattern=".*")
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(82, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveLinks_026(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches all of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree11")
         path = self.buildPath(["tree11", ])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeLinks(pattern=".*")
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)

   #####################
   # Test removeMatch()
   #####################

   def testRemoveMatch_001(self):
      """
      Test with an empty list and a pattern of None.
      """
      fsList = FilesystemList()
      self.failUnlessRaises(TypeError, fsList.removeMatch, pattern=None)

   def testRemoveMatch_002(self):
      """
      Test with an empty list and a non-empty pattern.
      """
      fsList = FilesystemList()
      count = fsList.removeMatch(pattern="pattern")
      self.failUnlessEqual(0, count)
      self.failUnlessRaises(ValueError, fsList.removeMatch, pattern="*.jpg")

   def testRemoveMatch_003(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches none of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveMatch_004(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches none of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveMatch_005(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveMatch_006(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches none of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001",
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeMatch(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) 
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveMatch_007(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveMatch_008(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches none of them.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveMatch_009(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches some of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=".*file00[135].*")
      self.failUnlessEqual(3, count)
      self.failUnlessEqual(5, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveMatch_010(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches some of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeMatch(pattern=".*dir00[2468].*")
      self.failUnlessEqual(4, count)
      self.failUnlessEqual(7, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveMatch_011(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches some of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*tree4.*dir006") self.failUnlessEqual(18, count) self.failUnlessEqual(63, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveMatch_012(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeMatch(pattern=".*file001.*") self.failUnlessEqual(3, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeMatch(pattern=".*file001.*") self.failUnlessEqual(3, count) self.failUnlessEqual(19, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveMatch_013(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*dir00[46].*") self.failUnlessEqual(25, count) self.failUnlessEqual(57, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveMatch_014(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file 
with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeMatch(pattern=".*with spaces.*") self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeMatch(pattern=".*with spaces.*") self.failUnlessEqual(7, count) self.failUnlessEqual(9, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) def testRemoveMatch_015(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches all of them. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(8, count) self.failUnlessEqual(0, len(fsList)) def testRemoveMatch_016(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches all of them. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(11, count) self.failUnlessEqual(0, len(fsList)) def 
testRemoveMatch_017(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches all of them. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(81, count) self.failUnlessEqual(0, len(fsList)) def testRemoveMatch_019(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches all of them. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(17, count) self.failUnlessEqual(0, len(fsList)) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(22, count) 
         self.failUnlessEqual(0, len(fsList))

   def testRemoveMatch_020(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(82, count) self.failUnlessEqual(0, len(fsList)) def testRemoveMatch_021(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches all of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(14, count) self.failUnlessEqual(0, len(fsList)) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(16, count) self.failUnlessEqual(0, len(fsList)) ####################### # Test removeInvalid() ####################### def testRemoveInvalid_001(self): """ Test with an empty list. 
""" fsList = FilesystemList() count = fsList.removeInvalid() self.failUnlessEqual(0, count) def testRemoveInvalid_002(self): """ Test with a non-empty list containing only invalid entries (some with spaces). """ self.extractTar("tree9") fsList = FilesystemList() fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", " %s 5 " % INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(5, len(fsList)) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", " %s 5 " % INVALID_FILE, ]) in fsList) count = fsList.removeInvalid() self.failUnlessEqual(5, count) self.failUnlessEqual(0, len(fsList)) def testRemoveInvalid_003(self): """ Test with a non-empty list containing only valid entries (files only). 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeInvalid() self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveInvalid_004(self): """ Test with a non-empty list containing only valid entries (directories only). 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeInvalid() self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveInvalid_005(self): """ Test with a non-empty list containing only valid entries (files and directories). 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeInvalid() self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveInvalid_006(self):
      """
      Test with a non-empty list containing only valid entries (files, directories and links).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveInvalid_007(self):
      """
      Test with a non-empty list containing valid and invalid entries (files, directories and links).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(21, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(4, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(26, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(4, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveInvalid_008(self):
      """
      Test with a non-empty list containing only valid entries (files, directories and links, some with spaces).
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   ###################
   # Test normalize()
   ###################

   def testNormalize_001(self):
      """
      Test with an empty list.
      """
      fsList = FilesystemList()
      self.failUnlessEqual(0, len(fsList))
      fsList.normalize()
      self.failUnlessEqual(0, len(fsList))

   def testNormalize_002(self):
      """
      Test with a list containing one entry.
      """
      fsList = FilesystemList()
      fsList.append("one")
      self.failUnlessEqual(1, len(fsList))
      fsList.normalize()
      self.failUnlessEqual(1, len(fsList))
      self.failUnless("one" in fsList)

   def testNormalize_003(self):
      """
      Test with a list containing two entries, no duplicates.
      """
      fsList = FilesystemList()
      fsList.append("one")
      fsList.append("two")
      self.failUnlessEqual(2, len(fsList))
      fsList.normalize()
      self.failUnlessEqual(2, len(fsList))
      self.failUnless("one" in fsList)
      self.failUnless("two" in fsList)

   def testNormalize_004(self):
      """
      Test with a list containing two entries, with duplicates.
      """
      fsList = FilesystemList()
      fsList.append("one")
      fsList.append("one")
      self.failUnlessEqual(2, len(fsList))
      fsList.normalize()
      self.failUnlessEqual(1, len(fsList))
      self.failUnless("one" in fsList)

   def testNormalize_005(self):
      """
      Test with a list containing many entries, no duplicates.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         fsList.normalize()
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         fsList.normalize()
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testNormalize_006(self):
      """
      Test with a list containing many entries, with duplicates.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         count = fsList.addDirContents(path)
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(34, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         fsList.normalize()
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         count = fsList.addDirContents(path)
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(44, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         fsList.normalize()
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   ################
   # Test verify()
   ################

   def testVerify_001(self):
      """
      Test with an empty list.
      """
      fsList = FilesystemList()
      ok = fsList.verify()
      self.failUnlessEqual(True, ok)

   def testVerify_002(self):
      """
      Test with a non-empty list containing only invalid entries.
      """
      self.extractTar("tree9")
      fsList = FilesystemList()
      fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk
      self.failUnlessEqual(4, len(fsList))
      self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList)
      ok = fsList.verify()
      self.failUnlessEqual(False, ok)
      self.failUnlessEqual(4, len(fsList))
      self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList)
self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) def testVerify_003(self): """ Test with a non-empty list containing only valid entries (files only). """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) ok = fsList.verify() self.failUnlessEqual(True, ok) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testVerify_004(self): """ Test with a non-empty list containing only valid entries (directories only). 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) ok = fsList.verify() self.failUnlessEqual(True, ok) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testVerify_005(self): """ Test with a non-empty list containing only valid entries (files and directories). 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      ok = fsList.verify()
      self.failUnlessEqual(True, ok)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testVerify_006(self):
      """
      Test with a non-empty list containing only valid entries (files, directories and links).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         ok = fsList.verify()
         self.failUnlessEqual(True, ok)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         ok = fsList.verify()
         self.failUnlessEqual(True, ok)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testVerify_007(self): """ Test with a non-empty list containing valid and invalid entries (files, directories and links). """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(21, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) 
in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) ok = fsList.verify() self.failUnlessEqual(False, ok) self.failUnlessEqual(21, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(26, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) ok = fsList.verify() self.failUnlessEqual(False, ok) self.failUnlessEqual(26, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testVerify_008(self): """ Test with a non-empty list containing valid and invalid entries (some containing spaces). 
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(18, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         ok = fsList.verify()
         self.failUnlessEqual(False, ok)
         self.failUnlessEqual(18, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ])) # file won't exist on disk
         fsList.append(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(20, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         ok = fsList.verify()
         self.failUnlessEqual(False, ok)
         self.failUnlessEqual(20, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-1" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-2" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-3" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "%s-4" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)


###########################
# TestBackupFileList class
###########################

class TestBackupFileList(unittest.TestCase):

   """Tests for the BackupFileList class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except: pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   def tarPath(self, components):
      """Builds a complete search path from a list of components, compatible with Python tar output."""
      if platformWindows():
         return self.buildPath(components)[3:].replace("\\", "/")
      else:
         result = self.buildPath(components)
         if result[0:1] == os.path.sep:
            return result[1:]
         return result

   def buildRandomPath(self, maxlength, extension):
      """Builds a complete, randomly-named search path."""
      maxlength -= len(self.tmpdir)
      maxlength -= len(extension)
      components = [ self.tmpdir, randomFilename(maxlength, suffix=extension), ]
      return buildPath(components)

   ################
   # Test addDir()
   ################

   def testAddDir_001(self):
      """
      Test that function is overridden, no exclusions.

      Since this function calls the superclass by definition, we can skimp a bit
      on validation and only ensure that it seems to be overridden properly.
      """
      self.extractTar("tree5")
      backupList = BackupFileList()
      dirPath = self.buildPath(["tree5", "dir001"])
      count = backupList.addDir(dirPath)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))
      if platformSupportsLinks():
         dirPath = self.buildPath(["tree5", "dir002", "link001", ])
         count = backupList.addDir(dirPath)
         self.failUnlessEqual(1, count)
         self.failUnlessEqual([dirPath], backupList)

   def testAddDir_002(self):
      """
      Test that function is overridden, excludeFiles set.

      Since this function calls the superclass by definition, we can skimp a bit
      on validation and only ensure that it seems to be overridden properly.
      """
      self.extractTar("tree5")
      backupList = BackupFileList()
      backupList.excludeFiles = True
      dirPath = self.buildPath(["tree5", "dir001"])
      count = backupList.addDir(dirPath)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))
      if platformSupportsLinks():
         dirPath = self.buildPath(["tree5", "dir002", "link001", ])
         count = backupList.addDir(dirPath)
         self.failUnlessEqual(1, count)
         self.failUnlessEqual([dirPath], backupList)

   def testAddDir_003(self):
      """
      Test that function is overridden, excludeDirs set.

      Since this function calls the superclass by definition, we can skimp a bit
      on validation and only ensure that it seems to be overridden properly.
      """
      self.extractTar("tree5")
      backupList = BackupFileList()
      backupList.excludeDirs = True
      dirPath = self.buildPath(["tree5", "dir001"])
      count = backupList.addDir(dirPath)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))
      if platformSupportsLinks():
         dirPath = self.buildPath(["tree5", "dir002", "link001", ])
         count = backupList.addDir(dirPath)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(0, len(backupList))

   def testAddDir_004(self):
      """
      Test that function is overridden, excludeLinks set.

      Since this function calls the superclass by definition, we can skimp a bit
      on validation and only ensure that it seems to be overridden properly.
      """
      if platformSupportsLinks():
         self.extractTar("tree5")
         backupList = BackupFileList()
         backupList.excludeLinks = True
         dirPath = self.buildPath(["tree5", "dir001"])
         count = backupList.addDir(dirPath)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(0, len(backupList))
         dirPath = self.buildPath(["tree5", "dir002", "link001", ])
         count = backupList.addDir(dirPath)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(0, len(backupList))

   def testAddDir_005(self):
      """
      Test that function is overridden, excludePaths set.

      Since this function calls the superclass by definition, we can skimp a bit
      on validation and only ensure that it seems to be overridden properly.
      """
      self.extractTar("tree5")
      backupList = BackupFileList()
      backupList.excludePaths = [ NOMATCH_PATH ]
      dirPath = self.buildPath(["tree5", "dir001"])
      count = backupList.addDir(dirPath)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))
      if platformSupportsLinks():
         dirPath = self.buildPath(["tree5", "dir002", "link001", ])
         count = backupList.addDir(dirPath)
         self.failUnlessEqual(1, count)
         self.failUnlessEqual([dirPath], backupList)

   def testAddDir_006(self):
      """
      Test that function is overridden, excludePatterns set.

      Since this function calls the superclass by definition, we can skimp a bit
      on validation and only ensure that it seems to be overridden properly.
      """
      self.extractTar("tree5")
      backupList = BackupFileList()
      backupList.excludePatterns = [ NOMATCH_PATH ]
      dirPath = self.buildPath(["tree5", "dir001"])
      count = backupList.addDir(dirPath)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))
      if platformSupportsLinks():
         dirPath = self.buildPath(["tree5", "dir002", "link001", ])
         count = backupList.addDir(dirPath)
         self.failUnlessEqual(1, count)
         self.failUnlessEqual([dirPath], backupList)

   ###################
   # Test totalSize()
   ###################

   def testTotalSize_001(self):
      """
      Test on an empty list.
      """
      backupList = BackupFileList()
      size = backupList.totalSize()
      self.failUnlessEqual(0, size)

   def testTotalSize_002(self):
      """
      Test on a non-empty list containing only valid entries.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1835, size)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1116, size)

   def testTotalSize_004(self):
      """
      Test on a non-empty list (some containing spaces).
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1705, size)
      else:
         self.failUnlessEqual(13, count)
         self.failUnlessEqual(13, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1085, size)

   def testTotalSize_005(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be
      possible).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1835, size)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1116, size)

   def testTotalSize_006(self):
      """
      Test on a non-empty list containing a non-existent file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1835, size)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         size = backupList.totalSize()
         self.failUnlessEqual(1116, size)

   #########################
   # Test generateSizeMap()
   #########################

   def testGenerateSizeMap_001(self):
      """
      Test on an empty list.
      """
      backupList = BackupFileList()
      sizeMap = backupList.generateSizeMap()
      self.failUnlessEqual(0, len(sizeMap))

   def testGenerateSizeMap_002(self):
      """
      Test on a non-empty list containing only valid entries.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(10, len(sizeMap))
         self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ])
         self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ])
         self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ])
         self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ])
         self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ])
         self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ])
         self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ])
         self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ])
         self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(15, len(sizeMap))
         self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ])
         self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link003", ]) ])
         self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ])
         self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link001", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link002", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ])
         self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link001", ]) ])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link002", ]) ])

   def testGenerateSizeMap_004(self):
      """
      Test on a non-empty list (some containing spaces).
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(11, len(sizeMap))
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link with spaces", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link002", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file001", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file002", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file003", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file with spaces", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file001", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "link with spaces", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "link001", ])])
      else:
         self.failUnlessEqual(13, count)
         self.failUnlessEqual(13, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(13, len(sizeMap))
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file001", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file with spaces", ])])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link001", ])])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link002", ])])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link003", ])])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link with spaces", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file001", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file002", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file003", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])])
         self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link002", ])])
         self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link with spaces", ])])

   def testGenerateSizeMap_005(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be
      possible).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         entries = [ (243, [ "tree9", "dir001", "file001", ]),
                     (268, [ "tree9", "dir001", "file002", ]),
                     (243, [ "tree9", "dir001", "link001", ]),
                     (268, [ "tree9", "dir001", "link002", ]),
                     (134, [ "tree9", "dir002", "file001", ]),
                     (74,  [ "tree9", "dir002", "file002", ]),
                     (134, [ "tree9", "dir002", "link003", ]),
                     (74,  [ "tree9", "dir002", "link004", ]),
                     (155, [ "tree9", "file001", ]),
                     (242, [ "tree9", "file002", ]), ]
         for (size, entry) in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(10, len(sizeMap))  # the back-door directory entry is excluded
         for (size, entry) in entries:
            self.failUnlessEqual(size, sizeMap[self.buildPath(entry)])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         entries = [ (243, [ "tree9", "dir001", "file001", ]),
                     (268, [ "tree9", "dir001", "file002", ]),
                     (0,   [ "tree9", "dir001", "link001", ]),
                     (0,   [ "tree9", "dir001", "link002", ]),
                     (0,   [ "tree9", "dir001", "link003", ]),
                     (134, [ "tree9", "dir002", "file001", ]),
                     (74,  [ "tree9", "dir002", "file002", ]),
                     (0,   [ "tree9", "dir002", "link001", ]),
                     (0,   [ "tree9", "dir002", "link002", ]),
                     (0,   [ "tree9", "dir002", "link003", ]),
                     (0,   [ "tree9", "dir002", "link004", ]),
                     (155, [ "tree9", "file001", ]),
                     (242, [ "tree9", "file002", ]),
                     (0,   [ "tree9", "link001", ]),
                     (0,   [ "tree9", "link002", ]), ]
         for (size, entry) in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(15, len(sizeMap))  # the back-door directory entry is excluded
         for (size, entry) in entries:
            self.failUnlessEqual(size, sizeMap[self.buildPath(entry)])

   def testGenerateSizeMap_006(self):
      """
      Test on a non-empty list containing a non-existent file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         entries = [ (243, [ "tree9", "dir001", "file001", ]),
                     (268, [ "tree9", "dir001", "file002", ]),
                     (243, [ "tree9", "dir001", "link001", ]),
                     (268, [ "tree9", "dir001", "link002", ]),
                     (134, [ "tree9", "dir002", "file001", ]),
                     (74,  [ "tree9", "dir002", "file002", ]),
                     (134, [ "tree9", "dir002", "link003", ]),
                     (74,  [ "tree9", "dir002", "link004", ]),
                     (155, [ "tree9", "file001", ]),
                     (242, [ "tree9", "file002", ]), ]
         for (size, entry) in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(10, len(sizeMap))  # the non-existent file is excluded
         for (size, entry) in entries:
            self.failUnlessEqual(size, sizeMap[self.buildPath(entry)])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         entries = [ (243, [ "tree9", "dir001", "file001", ]),
                     (268, [ "tree9", "dir001", "file002", ]),
                     (0,   [ "tree9", "dir001", "link001", ]),
                     (0,   [ "tree9", "dir001", "link002", ]),
                     (0,   [ "tree9", "dir001", "link003", ]),
                     (134, [ "tree9", "dir002", "file001", ]),
                     (74,  [ "tree9", "dir002", "file002", ]),
                     (0,   [ "tree9", "dir002", "link001", ]),
                     (0,   [ "tree9", "dir002", "link002", ]),
                     (0,   [ "tree9", "dir002", "link003", ]),
                     (0,   [ "tree9", "dir002", "link004", ]),
                     (155, [ "tree9", "file001", ]),
                     (242, [ "tree9", "file002", ]),
                     (0,   [ "tree9", "link001", ]),
                     (0,   [ "tree9", "link002", ]), ]
         for (size, entry) in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         sizeMap = backupList.generateSizeMap()
         self.failUnlessEqual(15, len(sizeMap))  # the non-existent file is excluded
         for (size, entry) in entries:
            self.failUnlessEqual(size, sizeMap[self.buildPath(entry)])

   ###########################
   # Test generateDigestMap()
   ###########################

   def testGenerateDigestMap_001(self):
      """
      Test on an empty list.
      """
      backupList = BackupFileList()
      digestMap = backupList.generateDigestMap()
      self.failUnlessEqual(0, len(digestMap))

   def testGenerateDigestMap_002(self):
      """
      Test on a non-empty list containing only valid entries.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         entries = [ ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "file001", ]),
                     ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "file002", ]),
                     ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "link001", ]),
                     ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "link002", ]),
                     ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "file001", ]),
                     ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "file002", ]),
                     ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "link003", ]),
                     ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "link004", ]),
                     ("3ef0b16a6237af9200b7a46c1987d6a555973847", [ "tree9", "file001", ]),
                     ("fae89085ee97b57ccefa7e30346c573bb0a769db", [ "tree9", "file002", ]), ]
         for (digest, entry) in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(10, len(digestMap))
         for (digest, entry) in entries:
            self.failUnlessEqual(digest, digestMap[self.buildPath(entry)])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ],
                        [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ],
                        [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ],
                        [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ],
                        [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ],
                        [ "tree9", "file002", ],
                        [ "tree9", "link001", ],
                        [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(6, len(digestMap))  # soft links are excluded from the digest map
         for (digest, entry) in [ ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "file001", ]),
                                  ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "file002", ]),
                                  ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "file001", ]),
                                  ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "file002", ]),
                                  ("3ef0b16a6237af9200b7a46c1987d6a555973847", [ "tree9", "file001", ]),
                                  ("fae89085ee97b57ccefa7e30346c573bb0a769db", [ "tree9", "file002", ]), ]:
            self.failUnlessEqual(digest, digestMap[self.buildPath(entry)])

   def testGenerateDigestMap_003(self):
      """
      Test on a non-empty list containing only valid entries (some containing spaces).
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(backupList))
         entries = [ [ "tree11", "dir with spaces", "file with spaces", ],
                     [ "tree11", "dir with spaces", "file001", ],
                     [ "tree11", "dir with spaces", "link with spaces", ],
                     [ "tree11", "dir with spaces", "link002", ],
                     [ "tree11", "dir002", "file001", ],
                     [ "tree11", "dir002", "file002", ],
                     [ "tree11", "dir002", "file003", ],
                     [ "tree11", "file with spaces", ],
                     [ "tree11", "file001", ],
                     [ "tree11", "link with spaces", ],
                     [ "tree11", "link001", ], ]
         for entry in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(11, len(digestMap))
         for entry in entries:  # every file in tree11 has identical contents
            self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath(entry)])
      else:
         self.failUnlessEqual(13, count)
         self.failUnlessEqual(13, len(backupList))
         for entry in [ [ "tree11", "file001", ],
                        [ "tree11", "file with spaces", ],
                        [ "tree11", "link001", ],
                        [ "tree11", "link002", ],
                        [ "tree11", "link003", ],
                        [ "tree11", "link with spaces", ],
                        [ "tree11", "dir002", "file001", ],
                        [ "tree11", "dir002", "file002", ],
                        [ "tree11", "dir002", "file003", ],
                        [ "tree11", "dir with spaces", "file001", ],
                        [ "tree11", "dir with spaces", "file with spaces", ],
                        [ "tree11", "dir with spaces", "link002", ],
                        [ "tree11", "dir with spaces", "link with spaces", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(7, len(digestMap))  # soft links are excluded from the digest map
         for entry in [ [ "tree11", "file001", ],
                        [ "tree11", "file with spaces", ],
                        [ "tree11", "dir002", "file001", ],
                        [ "tree11", "dir002", "file002", ],
                        [ "tree11", "dir002", "file003", ],
                        [ "tree11", "dir with spaces", "file001", ],
                        [ "tree11", "dir with spaces", "file with spaces", ], ]:
            self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath(entry)])

   def testGenerateDigestMap_004(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be possible).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         entries = [ ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "file001", ]),
                     ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "file002", ]),
                     ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "link001", ]),
                     ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "link002", ]),
                     ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "file001", ]),
                     ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "file002", ]),
                     ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "link003", ]),
                     ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "link004", ]),
                     ("3ef0b16a6237af9200b7a46c1987d6a555973847", [ "tree9", "file001", ]),
                     ("fae89085ee97b57ccefa7e30346c573bb0a769db", [ "tree9", "file002", ]), ]
         for (digest, entry) in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(10, len(digestMap))
         for (digest, entry) in entries:
            self.failUnlessEqual(digest, digestMap[self.buildPath(entry)])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         for entry in [ [ "tree9", "dir001", "file001", ],
                        [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ],
                        [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ],
                        [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ],
                        [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ],
                        [ "tree9", "file002", ],
                        [ "tree9", "link001", ],
                        [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(6, len(digestMap))  # links and the back-door directory are excluded
         for (digest, entry) in [ ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "file001", ]),
                                  ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "file002", ]),
                                  ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "file001", ]),
                                  ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "file002", ]),
                                  ("3ef0b16a6237af9200b7a46c1987d6a555973847", [ "tree9", "file001", ]),
                                  ("fae89085ee97b57ccefa7e30346c573bb0a769db", [ "tree9", "file002", ]), ]:
            self.failUnlessEqual(digest, digestMap[self.buildPath(entry)])

   def testGenerateDigestMap_005(self):
      """
      Test on a non-empty list containing a non-existent file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         entries = [ ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "file001", ]),
                     ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "file002", ]),
                     ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "link001", ]),
                     ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "link002", ]),
                     ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "file001", ]),
                     ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "file002", ]),
                     ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "link003", ]),
                     ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "link004", ]),
                     ("3ef0b16a6237af9200b7a46c1987d6a555973847", [ "tree9", "file001", ]),
                     ("fae89085ee97b57ccefa7e30346c573bb0a769db", [ "tree9", "file002", ]), ]
         for (digest, entry) in entries:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(10, len(digestMap))
         for (digest, entry) in entries:
            self.failUnlessEqual(digest, digestMap[self.buildPath(entry)])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         for entry in [ [ "tree9", "dir001", "file001", ],
                        [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ],
                        [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ],
                        [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ],
                        [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ],
                        [ "tree9", "file002", ],
                        [ "tree9", "link001", ],
                        [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(6, len(digestMap))  # links and the non-existent file are excluded
         for (digest, entry) in [ ("4ff529531c7e897cd3df90ed76355de7e21e77ee", [ "tree9", "dir001", "file001", ]),
                                  ("9d473094a22ecf2ae299c25932c941795d1d6cba", [ "tree9", "dir001", "file002", ]),
                                  ("2f68cdda26b643ca0e53be6348ae1255b8786c4b", [ "tree9", "dir002", "file001", ]),
                                  ("0cc03b3014d1ca7188264677cf01f015d72d26cb", [ "tree9", "dir002", "file002", ]),
                                  ("3ef0b16a6237af9200b7a46c1987d6a555973847", [ "tree9", "file001", ]),
                                  ("fae89085ee97b57ccefa7e30346c573bb0a769db", [ "tree9", "file002", ]), ]:
            self.failUnlessEqual(digest, digestMap[self.buildPath(entry)])

   def testGenerateDigestMap_006(self):
      """
      Test on an empty list, passing stripPrefix not None.
      """
      backupList = BackupFileList()
      prefix = "whatever"
      digestMap = backupList.generateDigestMap(stripPrefix=prefix)
      self.failUnlessEqual(0, len(digestMap))

   def testGenerateDigestMap_007(self):
      """
      Test on a non-empty list containing only valid entries, passing stripPrefix not None.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(10, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", 
"link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "\\", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(6, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])]) 
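         # Note (illustrative assumption, not taken from the implementation):
         # with stripPrefix set, generateDigestMap() is expected to key the map
         # by prefix-relative paths rather than absolute ones, conceptually:
         #
         #    key = os.sep + absolutePath[len(prefix):].lstrip(os.sep)
         #
         # which is why the expected keys in this test begin at the path
         # separator ("/" here, "\\" in the no-links branch above).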
self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])]) def testGenerateDigestMap_008(self): """ Test on a non-empty list containing only valid entries (some containing spaces), passing stripPrefix not None. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree11", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(11, len(digestMap)) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", 
digestMap[buildPath([ "\\", "dir with spaces", "file with spaces", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir with spaces", "file001", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir with spaces", "link with spaces", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir with spaces", "link002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir002", "file001", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir002", "file003", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file with spaces", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "link with spaces", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "link001", ])]) else: self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) 
self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree11", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(7, len(digestMap)) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file with spaces", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file001", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file003", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir with spaces", "file001", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir with spaces", "file with spaces", ])]) def testGenerateDigestMap_009(self): """ Test on a non-empty list containing a directory (which shouldn't be possible), passing stripPrefix not None. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(10, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "file001", ])]) 
self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "\\", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", 
"link002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(6, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])]) def testGenerateDigestMap_010(self): """ Test on a non-empty list containing a non-existent file, passing stripPrefix not None. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(10, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "\\", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(6, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])]) ######################## # Test generateFitted() ######################## def testGenerateFitted_001(self): """ Test on an empty list. 
      """
      backupList = BackupFileList()
      fittedList = backupList.generateFitted(2000)
      self.failUnlessEqual(0, len(fittedList))

   def testGenerateFitted_002(self):
      """
      Test on a non-empty list containing only valid entries, all of which
      fit.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         fittedList = backupList.generateFitted(2000)
         self.failUnlessEqual(10, len(fittedList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
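         # Illustrative sketch (assumption, not the actual implementation):
         # generateFitted(capacity) is expected to behave like a greedy fit,
         # keeping entries while their cumulative size stays within capacity:
         #
         #    def fitted(entries, capacity):       # entries: (path, size) pairs
         #       total, kept = 0, []
         #       for path, size in entries:
         #          if total + size <= capacity:
         #             kept.append(path)
         #             total += size
         #       return kept
         #
         # With capacity=2000 every entry in this small tree fits, so the
         # fitted list matches the backup list.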
self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(15, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList) def testGenerateFitted_003(self): """ Test on a non-empty list containing only valid entries (some containing spaces), all of which fit. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in 
backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(11, len(fittedList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fittedList) else: self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) 
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         fittedList = backupList.generateFitted(2000)
         self.failUnlessEqual(13, len(fittedList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fittedList)

   def testGenerateFitted_004(self):
      """
      Test on a non-empty list containing only valid entries, some of which
      fit.

      We can get some strange behavior on Windows, which hits the "links not
      supported" case.  The file tree9/dir002/file002 is 74 bytes, and is
      supposed to be the only file included because links are not recognized.
      However, link004 points at file002, and apparently Windows (sometimes?)
      sees link004 as a real file with a size of 74 bytes.  Since only one of
      the two fits in the fitted list, we just check for one or the other.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) fittedList = backupList.generateFitted(80) self.failUnlessEqual(1, len(fittedList)) self.failUnless((self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) or (self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) 
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         fittedList = backupList.generateFitted(80)
         self.failUnlessEqual(10, len(fittedList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList)

   def testGenerateFitted_005(self):
      """
      Test on a non-empty list containing only valid entries, none of which fit.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         fittedList = backupList.generateFitted(0)
         self.failUnlessEqual(0, len(fittedList))
         fittedList = backupList.generateFitted(50)
         self.failUnlessEqual(0, len(fittedList))
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         fittedList = backupList.generateFitted(0)
         self.failUnlessEqual(0, len(fittedList))
         fittedList = backupList.generateFitted(50)
         self.failUnlessEqual(9, len(fittedList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList)

   def testGenerateFitted_006(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be possible).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         fittedList = backupList.generateFitted(2000)
         self.failUnlessEqual(10, len(fittedList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         fittedList = backupList.generateFitted(2000)
         self.failUnlessEqual(15, len(fittedList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList)

   def testGenerateFitted_007(self):
      """
      Test on a non-empty list containing a non-existent file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         fittedList = backupList.generateFitted(2000)
         self.failUnlessEqual(10, len(fittedList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         fittedList = backupList.generateFitted(2000)
         self.failUnlessEqual(15, len(fittedList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList)

   ######################
   # Test generateSpan()
   ######################

   def testGenerateSpan_001(self):
      """
      Test on an empty list.
      """
      backupList = BackupFileList()
      spanSet = backupList.generateSpan(2000)
      self.failUnlessEqual(0, len(spanSet))

   def testGenerateSpan_002(self):
      """
      Test a set of files that all fit in one span item.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if platformSupportsLinks():
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         spanSet = backupList.generateSpan(2000)
         self.failUnlessEqual(1, len(spanSet))
         spanItem = spanSet[0]
         self.failUnlessEqual(15, len(spanItem.fileList))
         self.failUnlessEqual(1116, spanItem.size)
         self.failUnlessEqual(2000, spanItem.capacity)
         self.failUnlessEqual((1116.0/2000.0)*100.0, spanItem.utilization)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList)

   def testGenerateSpan_003(self):
      """
      Test a set of files that all fit in two span items.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if platformSupportsLinks():
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         spanSet = backupList.generateSpan(760, "best_fit")
         self.failUnlessEqual(2, len(spanSet))
         spanItem = spanSet[0]
         self.failUnlessEqual(12, len(spanItem.fileList))
         self.failUnlessEqual(753, spanItem.size)
         self.failUnlessEqual(760, spanItem.capacity)
         self.failUnlessEqual((753.0/760.0)*100.0, spanItem.utilization)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList)
         spanItem = spanSet[1]
         self.failUnlessEqual(3, len(spanItem.fileList))
         self.failUnlessEqual(363, spanItem.size)
         self.failUnlessEqual(760, spanItem.capacity)
         self.failUnlessEqual((363.0/760.0)*100.0, spanItem.utilization)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList)

   def testGenerateSpan_004(self):
      """
      Test a set of files that all fit in three span items.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if platformSupportsLinks():
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         spanSet = backupList.generateSpan(515, "best_fit")
         self.failUnlessEqual(3, len(spanSet))
         spanItem = spanSet[0]
         self.failUnlessEqual(11, len(spanItem.fileList))
         self.failUnlessEqual(511, spanItem.size)
         self.failUnlessEqual(515, spanItem.capacity)
         self.failUnlessEqual((511.0/515.0)*100.0, spanItem.utilization)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList)
         spanItem = spanSet[1]
         self.failUnlessEqual(3, len(spanItem.fileList))
         self.failUnlessEqual(471, spanItem.size)
         self.failUnlessEqual(515, spanItem.capacity)
         self.failUnlessEqual((471.0/515.0)*100.0, spanItem.utilization)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList)
         spanItem = spanSet[2]
         self.failUnlessEqual(1, len(spanItem.fileList))
         self.failUnlessEqual(134, spanItem.size)
         self.failUnlessEqual(515, spanItem.capacity)
         self.failUnlessEqual((134.0/515.0)*100.0, spanItem.utilization)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList)

   def testGenerateSpan_005(self):
      """
      Test a set of files where one of the files does not fit in the capacity.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if platformSupportsLinks():
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessRaises(ValueError, backupList.generateSpan, 250, "best_fit")

   #########################
   # Test generateTarfile()
   #########################

   def testGenerateTarfile_001(self):
      """
      Test on an empty list.
      """
      backupList = BackupFileList()
      tarPath = self.buildPath(["file.tar", ])
      self.failUnlessRaises(ValueError, backupList.generateTarfile, tarPath)
      self.failUnless(not os.path.exists(tarPath))

   def testGenerateTarfile_002(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be possible).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         tarPath = self.buildPath(["file.tar", ])
         backupList.generateTarfile(tarPath)
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(11, len(tarList))
         self.failUnless(self.tarPath([ "tree9", "dir001/" ]) in tarList or
                         self.tarPath([ "tree9", "dir001//" ]) in tarList     # Grr... Python 2.5 behavior differs
                         or self.tarPath([ "tree9", "dir001", ]) in tarList)  # Grr... Python 2.6 behavior differs
         self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir()
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         tarPath = self.buildPath(["file.tar", ])
         backupList.generateTarfile(tarPath)
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(16, len(tarList))
         self.failUnless(self.tarPath([ "tree9", "dir001/" ]) in tarList or
                         self.tarPath([ "tree9", "dir001//" ]) in tarList     # Grr... Python 2.5 behavior differs
                         or self.tarPath([ "tree9", "dir001", ]) in tarList)  # Grr... Python 2.6 behavior differs
         self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList)

   def testGenerateTarfile_003(self):
      """
      Test on a non-empty list containing a non-existent file, ignore=False.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         tarPath = self.buildPath(["file.tar", ])
         self.failUnlessRaises(tarfile.TarError, backupList.generateTarfile, tarPath, ignore=False)
         self.failUnless(not os.path.exists(tarPath))
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         tarPath = self.buildPath(["file.tar", ])
         self.failUnlessRaises(tarfile.TarError, backupList.generateTarfile, tarPath, ignore=False)
         self.failUnless(not os.path.exists(tarPath))

   def testGenerateTarfile_004(self):
      """
      Test on a non-empty list containing a non-existent file, ignore=True.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         tarPath = self.buildPath(["file.tar", ])
         backupList.generateTarfile(tarPath, ignore=True)
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(10, len(tarList))
         self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
         self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath, ignore=True) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in 
tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_005(self): """ Test on a non-empty list containing only valid entries, with an invalid mode. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) self.failUnlessRaises(ValueError, backupList.generateTarfile, tarPath, mode="bogus") self.failUnless(not os.path.exists(tarPath)) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) self.failUnlessRaises(ValueError, backupList.generateTarfile, tarPath, mode="bogus") self.failUnless(not os.path.exists(tarPath)) def testGenerateTarfile_006(self): """ Test on a non-empty list containing only valid entries, default mode. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, 
len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in 
backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_007(self): """ Test on a non-empty list (some containing spaces), default mode. 
""" self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(11, len(tarList)) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file003", ]) in 
tarList) self.failUnless(self.tarPath([ "tree11", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link001", ]) in tarList) else: self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(13, len(tarList)) self.failUnless(self.tarPath([ "tree11", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link002", ]) in tarList) self.failUnless(self.tarPath([ 
"tree11", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file003", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link with spaces", ]) in tarList) def testGenerateTarfile_008(self): """ Test on a non-empty list containing only valid entries, 'tar' mode. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) 
tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_009(self): """ Test on a non-empty list containing only valid entries, 'targz' mode. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar.gz", ]) backupList.generateTarfile(tarPath, mode="targz") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", 
"file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar.gz", ]) backupList.generateTarfile(tarPath, mode="targz") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) 
self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_010(self): """ Test on a non-empty list containing only valid entries, 'tarbz2' mode. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar.bz2", ]) backupList.generateTarfile(tarPath, mode="tarbz2") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() 
self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ 
"tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar.bz2", ]) backupList.generateTarfile(tarPath, mode="tarbz2") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_011(self): """ Test on a non-empty list containing only valid entries, 'tar' mode, long target name. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildRandomPath(255, ".tar") backupList.generateTarfile(tarPath, mode="tar") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", 
"file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) if platformCygwin(): tarPath = self.buildRandomPath(255, ".tar") # Cygwin inherits the Windows 255-char limit else: tarPath = self.buildRandomPath(260, ".tar") backupList.generateTarfile(tarPath, mode="tar") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", 
"dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_012(self): """ Test on a non-empty list containing only valid entries, 'targz' mode, long target name. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildRandomPath(255, ".tar.gz") backupList.generateTarfile(tarPath, 
mode="targz")
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(10, len(tarList))
         for components in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                             [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                             [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                             [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                             [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.tarPath(components) in tarList)
      else:
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                      [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                      [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                      [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                      [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList) # copy, since the helper may modify its argument
         if platformCygwin():
            tarPath = self.buildRandomPath(255, ".tar") # Cygwin inherits the Windows 255-char limit
         else:
            tarPath = self.buildRandomPath(260, ".tar")
         backupList.generateTarfile(tarPath, mode="targz")
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(15, len(tarList))
         for components in expected:
            self.failUnless(self.tarPath(list(components)) in tarList)

   def testGenerateTarfile_013(self):
      """
      Test on a non-empty list containing only valid entries, 'tarbz2' mode,
      long target name.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                      [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                      [ "tree9", "file001", ], [ "tree9", "file002", ], ]
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)
         tarPath = self.buildRandomPath(255, ".tar.bz2")
         backupList.generateTarfile(tarPath, mode="tarbz2")
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(10, len(tarList))
         for components in expected:
            self.failUnless(self.tarPath(list(components)) in tarList)
      else:
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                      [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                      [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                      [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                      [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)
         if platformCygwin():
            tarPath = self.buildRandomPath(255, ".tar") # Cygwin inherits the Windows 255-char limit
         else:
            tarPath = self.buildRandomPath(260, ".tar")
         backupList.generateTarfile(tarPath, mode="tarbz2")
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(15, len(tarList))
         for components in expected:
            self.failUnless(self.tarPath(list(components)) in tarList)

   def testGenerateTarfile_014(self):
      """
      Test behavior of the flat flag.
      """
      self.extractTar("tree11")
      backupList = BackupFileList()
      expected = [ [ "tree11", "dir with spaces", "file with spaces", ],
                   [ "tree11", "dir with spaces", "file001", ],
                   [ "tree11", "dir002", "file002", ],
                   [ "tree11", "dir002", "file003", ], ]
      for components in expected:
         backupList.addFile(self.buildPath(list(components)))
      self.failUnlessEqual(4, len(backupList))
      for components in expected:
         self.failUnless(self.buildPath(list(components)) in backupList)
      tarPath = self.buildPath(["file.tar", ])
      backupList.generateTarfile(tarPath, flat=True)
      self.failUnless(tarfile.is_tarfile(tarPath))
      tarFile = tarfile.open(tarPath)
      tarList = tarFile.getnames()
      tarFile.close()
      self.failUnlessEqual(4, len(tarList))
      self.failUnless("file with spaces" in tarList)
      self.failUnless("file001" in tarList)
      self.failUnless("file002" in tarList)
      self.failUnless("file003" in tarList)

   #########################
   # Test removeUnchanged()
   #########################

   def testRemoveUnchanged_001(self):
      """
      Test on an empty list with an empty digest map.
      """
      digestMap = {}
      backupList = BackupFileList()
      self.failUnlessEqual(0, len(backupList))
      count = backupList.removeUnchanged(digestMap)
      self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))

   def testRemoveUnchanged_002(self):
      """
      Test on an empty list with a non-empty digest map.
      """
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ]):"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file002", ]):"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      self.failUnlessEqual(0, len(backupList))
      count = backupList.removeUnchanged(digestMap)
      self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))

   def testRemoveUnchanged_003(self):
      """
      Test on a non-empty list with an empty digest map.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                      [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                      [ "tree9", "file001", ], [ "tree9", "file002", ], ]
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList) # copy, since the helper may modify its argument
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
      else:
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                      [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                      [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                      [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                      [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList) # copy, since the helper may modify its argument
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)

   def testRemoveUnchanged_004(self):
      """
      Test with a digest map containing only entries that are not in the list.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir003", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir004", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir004", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file003", ]):"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file004", ]):"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                      [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                      [ "tree9", "file001", ], [ "tree9", "file002", ], ]
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)
      else:
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                      [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                      [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                      [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                      [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)

   def testRemoveUnchanged_005(self):
      """
      Test with a digest map containing only entries that are in the list,
      with non-matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e8AAAAAAAAAAAAAAAAAA7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecAAAAAAAAAAAAAAAAAA95d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b64AAAAAAAAAAAAAAAAAA5b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1cAAAAAAAAAAAAAAAAAA5d72d26cb",
                    self.buildPath([ "tree9", "file001", ]):"3ef0b16a6237aAAAAAAAAAAAAAAAAAA555973847",
                    self.buildPath([ "tree9", "file002", ]):"fae89085ee97bAAAAAAAAAAAAAAAAAAbb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                      [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                      [ "tree9", "file001", ], [ "tree9", "file002", ], ]
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList) # copy, since the helper may modify its argument
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         for components in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                             [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                             [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                             [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                             [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                             [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                             [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(components) in backupList)

   def testRemoveUnchanged_006(self):
      """
      Test with a digest map containing only entries that are in the list,
      with matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ]):"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file002", ]):"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                      [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                      [ "tree9", "file001", ], [ "tree9", "file002", ], ]
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList) # copy, since the helper may modify its argument
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(4, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      else:
         expected = [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                      [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                      [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                      [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                      [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                      [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                      [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for components in expected:
            self.failUnless(self.buildPath(list(components)) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(9, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) def testRemoveUnchanged_007(self): """ Test with a digest map containing both entries that are and are not in the list, with non-matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531cCCCCCCCCCCCCCCCCCCCCCCCCCe77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a2CCCCCCCCCCCCCCCCCCCCCCCCCd6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26CCCCCCCCCCCCCCCCCCCCCCCCC86c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014CCCCCCCCCCCCCCCCCCCCCCCCCd26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a62CCCCCCCCCCCCCCCCCCCCCCCCC73847", self.buildPath([ "tree9", "file003", ]) :"fae89085eeCCCCCCCCCCCCCCCCCCCCCCCCC769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) def testRemoveUnchanged_008(self): """ Test with a digest map containing both entries that are and are not in the list, with matching digests. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(3, count) self.failUnlessEqual(7, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(3, count) self.failUnlessEqual(12, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) def testRemoveUnchanged_009(self): """ Test with a digest map containing both entries that are and are not in the list, with matching and non-matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531AAAAAAAAAAAAAAAAAAAAAAAe21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(2, count) self.failUnlessEqual(8, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(2, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) def testRemoveUnchanged_010(self): """ Test on an empty list with an empty digest map. 
""" digestMap = {} backupList = BackupFileList() self.failUnlessEqual(0, len(backupList)) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(0, len(backupList)) self.failUnlessEqual(0, len(newDigest)) def testRemoveUnchanged_011(self): """ Test on an empty list with an non-empty digest map. """ digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() self.failUnlessEqual(0, len(backupList)) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(0, len(backupList)) self.failUnlessEqual(0, len(newDigest)) def testRemoveUnchanged_012(self): """ Test on an non-empty list with an empty digest map. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", 
"file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) self.failUnlessEqual(6, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_013(self): """ Test with a digest map containing only entries that are not in the list. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir003", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir003", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir004", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir004", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file003", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file004", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", 
"link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) self.failUnlessEqual(6, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) 
self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_014(self): """ Test with a digest map containing only entries that are in the list, with non-matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e8AAAAAAAAAAAAAAAAAA7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecAAAAAAAAAAAAAAAAAA95d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b64AAAAAAAAAAAAAAAAAA5b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1cAAAAAAAAAAAAAAAAAA5d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237aAAAAAAAAAAAAAAAAAA555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97bAAAAAAAAAAAAAAAAAAbb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) 
in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", 
newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) self.failUnlessEqual(6, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_015(self): """ Test with a digest map containing only entries that are in the list, with matching 
digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(6, count) self.failUnlessEqual(4, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", 
"file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(6, count) self.failUnlessEqual(9, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) self.failUnlessEqual(6, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) 
self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_016(self): """ Test with a digest map containing both entries that are and are not in the list, with non-matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531cCCCCCCCCCCCCCCCCCCCCCCCCCe77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a2CCCCCCCCCCCCCCCCCCCCCCCCCd6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26CCCCCCCCCCCCCCCCCCCCCCCCC86c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014CCCCCCCCCCCCCCCCCCCCCCCCCd26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a62CCCCCCCCCCCCCCCCCCCCCCCCC73847", self.buildPath([ "tree9", "file003", ]) :"fae89085eeCCCCCCCCCCCCCCCCCCCCCCCCC769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) 
self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, 
BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) self.failUnlessEqual(6, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) 
self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_017(self): """ Test with a digest map containing both entries that are and are not in the list, with matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we 
just replaced it self.failUnlessEqual(3, count) self.failUnlessEqual(7, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, 
len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(3, count) self.failUnlessEqual(12, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) 
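The `removeUnchanged(digestMap, captureDigest=True)` contract that these cases keep re-checking can be sketched in isolation. The helper below is hypothetical, written only to illustrate the behavior the assertions describe, and is not Cedar Backup's actual implementation: entries whose current digest matches the supplied map are dropped from the list and counted, and a fresh path-to-digest map is captured for every file examined.

```python
import hashlib

def sha1_digest(path):
    # SHA-1 hex digest of a file's contents, matching the digests asserted above.
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def remove_unchanged(paths, digest_map):
    # Hypothetical sketch of the captureDigest=True behavior: remove files
    # whose digest matches digest_map, return (removed count, new digest map).
    new_digest = {}
    kept = []
    removed = 0
    for path in paths:
        digest = sha1_digest(path)
        new_digest[path] = digest
        if digest_map.get(path) == digest:
            removed += 1       # unchanged: drop from the backup list
        else:
            kept.append(path)  # changed or unknown: keep it
    paths[:] = kept            # mutate the list in place
    return (removed, new_digest)
```

Map entries naming paths that are not in the list are simply never consulted, which is why the "not in the list" digest map entries in these tests never affect the removal count.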
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

   def testRemoveUnchanged_018(self):
      """
      Test with a digest map containing both entries that are and are not in
      the list, with matching and non-matching digests.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531AAAAAAAAAAAAAAAAAAAAAAAe21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(2, count) self.failUnlessEqual(8, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(2, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) self.failUnlessEqual(6, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) ######################### # Test _generateDigest() ######################### def testGenerateDigest_001(self): """ Test that _generateDigest gives back same result as the slower simplistic implementation for a set of files (just using all of the resource files). """ for key in self.resources.keys(): path = self.resources[key] if platformRequiresBinaryRead(): try: import hashlib digest1 = hashlib.sha1(open(path, mode="rb").read()).hexdigest() except ImportError: import sha digest1 = sha.new(open(path, mode="rb").read()).hexdigest() else: try: import hashlib digest1 = hashlib.sha1(open(path).read()).hexdigest() except ImportError: import sha digest1 = sha.new(open(path).read()).hexdigest() digest2 = BackupFileList._generateDigest(path) self.failUnlessEqual(digest1, digest2, "Digest for %s varies: [%s] vs [%s]." 
% (path, digest1, digest2)) ########################## # TestPurgeItemList class ########################## class TestPurgeItemList(unittest.TestCase): """Tests for the PurgeItemList class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def pathPattern(self, path): """Returns properly-escaped regular expression pattern matching the indicated path.""" return ".*%s.*" % path.replace("\\", "\\\\") ######################## # Test addDirContents() ######################## def testAddDirContents_001(self): """ Attempt to add a directory that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_002(self): """ Attempt to add a file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_003(self): """ Attempt to add a soft link; no exclusions. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() count = purgeList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], purgeList) def testAddDirContents_004(self): """ Attempt to add an empty directory containing ignore file; no exclusions. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_005(self): """ Attempt to add an empty directory; no exclusions. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_006(self): """ Attempt to add an non-empty directory containing ignore file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_007(self): """ Attempt to add an non-empty directory; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_008(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeFiles = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_009(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeFiles = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_010(self): """ Attempt to add a soft link; excludeFiles set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeFiles = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_011(self): """ Attempt to add an empty directory containing ignore file; excludeFiles set. 
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_012(self): """ Attempt to add an empty directory; excludeFiles set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_013(self): """ Attempt to add an non-empty directory containing ignore file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_014(self): """ Attempt to add an non-empty directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) self.failUnlessEqual(4, count) self.failUnlessEqual(4, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) def testAddDirContents_015(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeDirs = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_016(self): """ Attempt to add a file; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_017(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeDirs = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_018(self): """ Attempt to add an empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_019(self): """ Attempt to add an empty directory; excludeDirs set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_020(self): """ Attempt to add an non-empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_021(self): """ Attempt to add an non-empty directory; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(3, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_023(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeLinks = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_024(self): """ Attempt to add a file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_025(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeLinks = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_026(self): """ Attempt to add an empty directory containing ignore file; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_027(self): """ Attempt to add an empty directory; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_028(self): """ Attempt to add an non-empty directory containing ignore file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_029(self): """ Attempt to add an non-empty directory; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(6, count) self.failUnlessEqual(6, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) def testAddDirContents_030(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_031(self): """ Attempt to add a file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_032(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_033(self): """ Attempt to add an empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_034(self): """ Attempt to add an empty directory; with excludePaths including the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_035(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_036(self): """ Attempt to add an non-empty directory; with excludePaths including the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_037(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_038(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_039(self): """ Attempt to add a soft link; with excludePaths not including the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_040(self): """ Attempt to add an empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_041(self): """ Attempt to add an empty directory; with excludePaths not including the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_042(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_043(self): """ Attempt to add an non-empty directory; with excludePaths not including the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_044(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_045(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_046(self): """ Attempt to add a soft link; with excludePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_047(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_048(self): """ Attempt to add an empty directory; with excludePatterns matching the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_049(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_050(self): """ Attempt to add an non-empty directory; with excludePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_051(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_052(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_053(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_054(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_055(self): """ Attempt to add an empty directory; with excludePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_056(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_057(self): """ Attempt to add an non-empty directory; with excludePatterns not matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_058(self): """ Attempt to add a large tree with no exclusions. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(121, count) self.failUnlessEqual(121, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", 
"file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(135, count) self.failUnlessEqual(135, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) 
in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", 
"file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_059(self): """ Attempt to add a large tree, with excludeFiles set. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeFiles = True count = purgeList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(27, count) self.failUnlessEqual(27, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) else: self.failUnlessEqual(41, count) self.failUnlessEqual(41, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_060(self): """ Attempt to add a large tree, with excludeDirs set. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(94, count) self.failUnlessEqual(94, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) 
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)

   def testAddDirContents_061(self):
      """
      Attempt to add a large tree, with excludeLinks set.
      """
      if platformSupportsLinks():
         self.extractTar("tree6")
         path = self.buildPath(["tree6"])
         purgeList = PurgeItemList()
         purgeList.excludeLinks = True
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(95, count)
         self.failUnlessEqual(95, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)

   def testAddDirContents_062(self):
      """
      Attempt to add a large tree, with excludePaths set to exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.excludePaths = [
         self.buildPath([ "tree6", "dir001", "dir002", ]),
         self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]),
         self.buildPath([ "tree6", "dir003", "dir002", "file001", ]),
         self.buildPath([ "tree6", "dir003", "dir002", "file002", ]),
      ]
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(111, count)
         self.failUnlessEqual(111, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(124, count)
         self.failUnlessEqual(124, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_063(self):
      """
      Attempt to add a large tree, with excludePatterns set to exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      if platformWindows():
         purgeList.excludePatterns = [ ".*file001.*", r".*tree6\\dir002\\dir001.*" ]
      else:
         purgeList.excludePatterns = [ ".*file001.*", ".*tree6/dir002/dir001.*" ]  # '/' needs no escape in a regex
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(94, count)
         self.failUnlessEqual(94, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(107, count)
         self.failUnlessEqual(107, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_064(self):
      """
      Attempt to add a large tree, with ignoreFile set to exclude some directories.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(69, count)
         self.failUnlessEqual(69, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(78, count)
         self.failUnlessEqual(78, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_065(self):
      """
      Attempt to add a link to a file.
      """
      if platformSupportsLinks():
         self.extractTar("tree9")
         path = self.buildPath(["tree9", "dir002", "link003", ])
         purgeList = PurgeItemList()
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_066(self):
      """
      Attempt to add a link to a directory (which should add its contents).
      """
      if platformSupportsLinks():
         self.extractTar("tree9")
         path = self.buildPath(["tree9", "link002" ])
         purgeList = PurgeItemList()
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(8, count)
         self.failUnlessEqual(8, len(purgeList))
         self.failUnless(self.buildPath([ "tree9", "link002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link004", ]) in purgeList)

   def testAddDirContents_067(self):
      """
      Attempt to add an invalid link (i.e. a link that points to something that doesn't exist).
      """
      if platformSupportsLinks():
         self.extractTar("tree10")
         path = self.buildPath(["tree10", "link001"])
         purgeList = PurgeItemList()
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_068(self):
      """
      Attempt to add directory containing an invalid link (i.e. a link that
      points to something that doesn't exist).
      """
      if platformSupportsLinks():
         self.extractTar("tree10")
         path = self.buildPath(["tree10"])
         purgeList = PurgeItemList()
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(2, count)
         self.failUnlessEqual(2, len(purgeList))
         self.failUnless(self.buildPath([ "tree10", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree10", "dir002", ]) in purgeList)

   def testAddDirContents_069(self):
      """
      Attempt to add a directory containing items with spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(13, count)
         self.failUnlessEqual(13, len(purgeList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(purgeList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)

   def testAddDirContents_070(self):
      """
      Attempt to add a directory which has a name containing spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", "dir with spaces", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(4, count)
      self.failUnlessEqual(4, len(purgeList))
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)

   def testAddDirContents_071(self):
      """
      Attempt to add a directory which has a UTF-8 filename in it.
      """
      self.extractTar("tree12")
      path = self.buildPath(["tree12", "unicode", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(5, count)
      self.failUnlessEqual(5, len(purgeList))
      self.failUnless(self.buildPath([ "tree12", "unicode", "README.strange-name", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.long.gz", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.cp437.gz", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.short.gz", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "\xe2\x99\xaa\xe2\x99\xac", ]) in purgeList)

   def testAddDirContents_072(self):
      """
      Attempt to add a directory which has several UTF-8 filenames in it.
      This test data was taken from Rick Lowe's problems around the release of
      v1.10.  I don't run the test for Darwin (Mac OS X) because the tarball
      isn't valid there.
      """
      if not (platformMacOsX() and sys.getfilesystemencoding() == "utf-8"):
         self.extractTar("tree13")
         path = self.buildPath(["tree13", ])
         purgeList = PurgeItemList()
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(purgeList))
         self.failUnless(self.buildPath([ "tree13", "Les mouvements de r\x82forme.doc", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l'\x82nonc\x82.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard - renvois et bibliographie.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard copie finale.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci - page titre.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "Rammstein - B\x81ck Dich.mp3", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "megaherz - Glas Und Tr\x84nen.mp3", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "Megaherz - Mistst\x81ck.MP3", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "Rammstein - Mutter - B\x94se.mp3", ]) in purgeList)

   def testAddDirContents_073(self):
      """
      Attempt to add a directory that doesn't exist; with
      excludeBasenamePatterns matching the path.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ INVALID_FILE ]
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_074(self):
      """
      Attempt to add a file; with excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "file001", ]
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_075(self):
      """
      Attempt to add a soft link; with excludeBasenamePatterns matching the path.
      """
      if platformSupportsLinks():
         self.extractTar("tree5")
         path = self.buildPath(["tree5", "link001"])     # link to a file
         purgeList = PurgeItemList()
         purgeList.excludeBasenamePatterns = [ "link001", ]
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)
         path = self.buildPath(["tree5", "dir002", "link001"])    # link to a dir
         purgeList = PurgeItemList()
         purgeList.excludeBasenamePatterns = [ "link001", ]
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual([], purgeList)

   def testAddDirContents_076(self):
      """
      Attempt to add an empty directory containing ignore file; with
      excludeBasenamePatterns matching the path.
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeBasenamePatterns = [ "dir001", ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_077(self): """ Attempt to add an empty directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ "dir001", ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_078(self): """ Attempt to add an non-empty directory containing ignore file; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeBasenamePatterns = [ "dir008", ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_079(self): """ Attempt to add an non-empty directory; with excludeBasenamePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ "dir001", ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_080(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_081(self): """ Attempt to add a file; with excludeBasenamePatterns not matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_082(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_083(self): """ Attempt to add an empty directory containing ignore file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_084(self): """ Attempt to add an empty directory; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_085(self): """ Attempt to add an non-empty directory containing ignore file; with excludeBasenamePatterns not matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_086(self): """ Attempt to add an non-empty directory; with excludeBasenamePatterns not matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = purgeList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_087(self): """ Attempt to add a large tree, with excludeBasenamePatterns set to exclude some entries. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ "file001", "dir001", ] count = purgeList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(54, count) self.failUnlessEqual(54, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) 
in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(63, count) self.failUnlessEqual(63, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", 
"link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_088(self): """ Attempt to add a large tree, with excludeBasenamePatterns set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ "file001", "dir001" ] count = purgeList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(54, count) self.failUnlessEqual(54, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) 
in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(63, count) self.failUnlessEqual(63, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", 
         "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_089(self):
      """
      Attempt to add a large tree with no exclusions
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(121, count)
         self.failUnlessEqual(121, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(135, count)
         self.failUnlessEqual(135, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_090(self):
      """
      Attempt to add a directory with linkDepth=1.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=1)
      if not platformSupportsLinks():
         self.failUnlessEqual(121, count)
         self.failUnlessEqual(121, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(164, count)
         self.failUnlessEqual(164, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002",
"file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "link002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) def testAddDirContents_091(self): """ Attempt to add a directory with linkDepth=2. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=2) if not platformSupportsLinks(): self.failUnlessEqual(121, count) self.failUnlessEqual(121, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", 
"file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", 
]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(240, count)
         self.failUnlessEqual(240, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001",
"dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", 
"file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"link002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"link005", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file001", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) def 
testAddDirContents_092(self): """ Attempt to add a directory with linkDepth=0, dereference=False. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=0, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) def testAddDirContents_093(self): """ Attempt to add a directory with linkDepth=1, dereference=False. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=1, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", ]) in purgeList) def testAddDirContents_094(self): """ Attempt to add a directory with linkDepth=2, dereference=False. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=2, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(19, count) self.failUnlessEqual(19, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", 
"link003", "link002", "link002", ]) in purgeList) def testAddDirContents_095(self): """ Attempt to add a directory with linkDepth=3, dereference=False. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=3, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(19, count) self.failUnlessEqual(19, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in purgeList) 
self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in purgeList) def testAddDirContents_096(self): """ Attempt to add a directory with linkDepth=0, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=0, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) def testAddDirContents_097(self): """ Attempt to add a directory with linkDepth=1, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=1, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(19, count) self.failUnlessEqual(19, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005" ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList) def testAddDirContents_098(self): """ Attempt to add a directory with linkDepth=2, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=2, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(31, count) self.failUnlessEqual(31, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", 
"file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in purgeList) def testAddDirContents_099(self): """ Attempt to add a directory with linkDepth=3, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=3, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(34, count) self.failUnlessEqual(34, len(purgeList)) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList) 
self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir004", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir007", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir007", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree22", "dir008", "file001", ]) in purgeList) #################### # Test removeAged() 
   ############################

   def testRemoveYoungFiles_001(self):
      """
      Test on an empty list, daysOld < 0.
      """
      daysOld = -1
      purgeList = PurgeItemList()
      self.failUnlessRaises(ValueError, purgeList.removeYoungFiles, daysOld)

   def testRemoveYoungFiles_002(self):
      """
      Test on a non-empty list, daysOld < 0.
      """
      daysOld = -1
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addDir(self.buildPath([ "tree1", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      self.failUnlessRaises(ValueError, purgeList.removeYoungFiles, daysOld)

   def testRemoveYoungFiles_003(self):
      """
      Test on an empty list, daysOld = 0
      """
      daysOld = 0
      purgeList = PurgeItemList()
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testRemoveYoungFiles_004(self):
      """
      Test on a non-empty list containing only directories, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree2")
      purgeList = PurgeItemList()
      purgeList.addDir(self.buildPath([ "tree2", ]))
      purgeList.addDir(self.buildPath([ "tree2", "dir001", ]))
      purgeList.addDir(self.buildPath([ "tree2", "dir002", ]))
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(3, len(purgeList))
      self.failUnless(self.buildPath([ "tree2", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList)

   def testRemoveYoungFiles_005(self):
      """
      Test on a non-empty list containing only links, daysOld = 0.
      """
      if platformSupportsLinks():
         daysOld = 0
         self.extractTar("tree9")
         purgeList = PurgeItemList()
         purgeList.addDir(self.buildPath([ "tree9", "link001", ]))
         purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ]))
         purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ]))
         count = purgeList.removeYoungFiles(daysOld)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(3, len(purgeList))
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList)

   def testRemoveYoungFiles_006(self):
      """
      Test on a non-empty list containing only non-existent files, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.append(self.buildPath([ "tree1", "stuff001", ]))  # append, since it doesn't exist on disk
      purgeList.append(self.buildPath([ "tree1", "stuff002", ]))  # append, since it doesn't exist on disk
      purgeList.append(self.buildPath([ "tree1", "stuff003", ]))  # append, since it doesn't exist on disk
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(3, len(purgeList))
      self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList)

   def testRemoveYoungFiles_007(self):
      """
      Test on a non-empty list containing existing files "touched" to current
      time, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]))
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]))
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_008(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      1 hour old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_009(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      2 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_010(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      12 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_011(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      23 hours old, daysOld = 0.
""" daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_012(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 0. 
""" daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_013(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 0. 
""" daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_014(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 0. 
""" daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_015(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 0. 
""" daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_016(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 0. 
""" daysOld = 0 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_017(self): """ Test on an empty list, daysOld = 1 """ daysOld = 1 purgeList = PurgeItemList() count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_018(self): """ Test on a non-empty list containing only directories, daysOld = 1. """ daysOld = 1 self.extractTar("tree2") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", ])) purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree2", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList) def testRemoveYoungFiles_019(self): """ Test on a non-empty list containing only links, daysOld = 1. 
""" if platformSupportsLinks(): daysOld = 1 self.extractTar("tree9") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ])) purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList) def testRemoveYoungFiles_020(self): """ Test on a non-empty list containing only non-existent files, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_021(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_022(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_023(self): """ Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_024(self): """ Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_025(self): """ Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_026(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(2, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_027(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_028(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_029(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_030(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_031(self): """ Test on an empty list, daysOld = 2 """ daysOld = 2 purgeList = PurgeItemList() count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_032(self): """ Test on a non-empty list containing only directories, 
daysOld = 2. """ daysOld = 2 self.extractTar("tree2") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", ])) purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree2", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList) def testRemoveYoungFiles_033(self): """ Test on a non-empty list containing only links, daysOld = 2. """ if platformSupportsLinks(): daysOld = 2 self.extractTar("tree9") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ])) purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList) def testRemoveYoungFiles_034(self): """ Test on a non-empty list containing only non-existent files, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_035(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_036(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_037(self): """ Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_038(self): """ Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_039(self): """ Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_040(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_041(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_042(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_043(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_044(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_045(self): """ Test on an empty list, daysOld = 3 """ daysOld = 3 purgeList = PurgeItemList() count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_046(self): """ Test on a non-empty list containing only directories, daysOld = 3. """ daysOld = 3 self.extractTar("tree2") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", ])) purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree2", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList) def testRemoveYoungFiles_047(self): """ Test on a non-empty list containing only links, daysOld = 3. 
""" if platformSupportsLinks(): daysOld = 3 self.extractTar("tree9") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ])) purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList) def testRemoveYoungFiles_048(self): """ Test on a non-empty list containing only non-existent files, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_049(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_050(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_051(self): """ Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_052(self): """ Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_053(self): """ Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_054(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_055(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_056(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_057(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_058(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) #################### # Test purgeItems() #################### def testPurgeItems_001(self): """ Test with an empty list. """ purgeList = PurgeItemList() (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(0, dirs) def testPurgeItems_002(self): """ Test with a list containing only non-empty directories. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", ])) purgeList.addDir(self.buildPath([ "tree9", "dir001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) 
in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in 
fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", ])) purgeList.addDir(self.buildPath([ "tree9", "dir001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testPurgeItems_003(self): """ Test with a list containing only empty directories. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) 
purgeList.addDir(self.buildPath([ "tree2", "dir003", ])) purgeList.addDir(self.buildPath([ "tree2", "dir004", ])) purgeList.addDir(self.buildPath([ "tree2", "dir005", ])) purgeList.addDir(self.buildPath([ "tree2", "dir006", ])) purgeList.addDir(self.buildPath([ "tree2", "dir007", ])) purgeList.addDir(self.buildPath([ "tree2", "dir008", ])) purgeList.addDir(self.buildPath([ "tree2", "dir009", ])) purgeList.addDir(self.buildPath([ "tree2", "dir010", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(10, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) def testPurgeItems_004(self): """ Test with a list containing only files. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.addFile(self.buildPath([ "tree1", "file005", ])) purgeList.addFile(self.buildPath([ "tree1", "file006", ])) purgeList.addFile(self.buildPath([ "tree1", "file007", ])) (files, dirs) = 
purgeList.purgeItems() self.failUnlessEqual(7, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) def testPurgeItems_005(self): """ Test with a list containing a directory and some of the files in that directory. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(4, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(4, count) self.failUnlessEqual(4, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testPurgeItems_006(self): """ Test with a list containing a directory and all of the files in 
that directory. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.addFile(self.buildPath([ "tree1", "file005", ])) purgeList.addFile(self.buildPath([ "tree1", "file006", ])) purgeList.addFile(self.buildPath([ "tree1", "file007", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(7, files) self.failUnlessEqual(1, dirs) self.failUnlessRaises(ValueError, fsList.addDirContents, path) self.failUnless(not os.path.exists(path)) def testPurgeItems_007(self): """ Test with a list containing various kinds of entries, including links, files and directories. Make sure that removing a link doesn't remove the file the link points toward. 
""" if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree9", "dir001", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "dir001", ])) 
purgeList.addFile(self.buildPath([ "tree9", "file001", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(2, files) self.failUnlessEqual(1, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(18, count) self.failUnlessEqual(18, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(os.path.islink(self.buildPath([ "tree9", "dir002", "link001", ]))) # won't be included in list, though self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testPurgeItems_008(self): """ Test with a list containing non-existent entries. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.append(self.buildPath([ "tree1", INVALID_FILE, ])) # bypass validations (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(4, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(4, count) self.failUnlessEqual(4, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testPurgeItems_009(self): """ Test with a list containing entries containing spaces. 
""" self.extractTar("tree11") path = self.buildPath(["tree11"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree11", "file with spaces", ])) purgeList.addFile(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(2, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(12, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", 
"dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) purgeList = PurgeItemList() 
purgeList.addFile(self.buildPath([ "tree11", "file with spaces", ])) purgeList.addFile(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(2, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(12, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) not in fsList) # file it points to was removed self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) not in fsList) # file it points to was removed self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, 
tarname, within=None):
      """Extracts a tarfile with a particular name."""
      if within is None:
         extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])
      else:
         path = os.path.join(self.tmpdir, within)
         os.mkdir(path)
         extractTar(path, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   #########################
   # Test compareContents()
   #########################

   def testCompareContents_001(self):
      """
      Compare two empty directories.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree2", within="path2")
      path1 = self.buildPath(["path1", "tree2", "dir001", ])
      path2 = self.buildPath(["path2", "tree2", "dir002", ])
      compareContents(path1, path2)
      compareContents(path1, path2, verbose=True)

   def testCompareContents_002(self):
      """
      Compare one empty and one non-empty directory containing only directories.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree2", within="path2")
      path1 = self.buildPath(["path1", "tree2", "dir001", ])
      path2 = self.buildPath(["path2", "tree2", ])
      compareContents(path1, path2)
      compareContents(path1, path2, verbose=True)

   def testCompareContents_003(self):
      """
      Compare one empty and one non-empty directory containing only files.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree1", within="path2")
      path1 = self.buildPath(["path1", "tree2", "dir001", ])
      path2 = self.buildPath(["path2", "tree1", ])
      self.failUnlessRaises(ValueError, compareContents, path1, path2)
      self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True)

   def testCompareContents_004(self):
      """
      Compare two directories containing only directories, same.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree2", within="path2")
      path1 = self.buildPath(["path1", "tree2", ])
      path2 = self.buildPath(["path2", "tree2", ])
      compareContents(path1, path2)
      compareContents(path1, path2, verbose=True)

   def testCompareContents_005(self):
      """
      Compare two directories containing only directories, different set.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree3", within="path2")
      path1 = self.buildPath(["path1", "tree2", ])
      path2 = self.buildPath(["path2", "tree3", ])
      compareContents(path1, path2)                 # no error, since directories don't count
      compareContents(path1, path2, verbose=True)   # no error, since directories don't count

   def testCompareContents_006(self):
      """
      Compare two directories containing only files, same.
      """
      self.extractTar("tree1", within="path1")
      self.extractTar("tree1", within="path2")
      path1 = self.buildPath(["path1", "tree1", ])
      path2 = self.buildPath(["path2", "tree1", ])
      compareContents(path1, path2)
      compareContents(path1, path2, verbose=True)

   def testCompareContents_007(self):
      """
      Compare two directories containing only files, different contents.
      """
      self.extractTar("tree1", within="path1")
      self.extractTar("tree1", within="path2")
      path1 = self.buildPath(["path1", "tree1", ])
      path2 = self.buildPath(["path2", "tree1", ])
      open(self.buildPath(["path1", "tree1", "file004", ]), "a").write("BOGUS")  # change content
      self.failUnlessRaises(ValueError, compareContents, path1, path2)
      self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True)

   def testCompareContents_008(self):
      """
      Compare two directories containing only files, different set.
      """
      self.extractTar("tree1", within="path1")
      self.extractTar("tree7", within="path2")
      path1 = self.buildPath(["path1", "tree1", ])
      path2 = self.buildPath(["path2", "tree7", "dir001", ])
      self.failUnlessRaises(ValueError, compareContents, path1, path2)
      self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True)

   def testCompareContents_009(self):
      """
      Compare two directories containing files and directories, same.
      """
      self.extractTar("tree9", within="path1")
      self.extractTar("tree9", within="path2")
      path1 = self.buildPath(["path1", "tree9", ])
      path2 = self.buildPath(["path2", "tree9", ])
      compareContents(path1, path2)
      compareContents(path1, path2, verbose=True)

   def testCompareContents_010(self):
      """
      Compare two directories containing files and directories, different contents.
      """
      self.extractTar("tree9", within="path1")
      self.extractTar("tree9", within="path2")
      path1 = self.buildPath(["path1", "tree9", ])
      path2 = self.buildPath(["path2", "tree9", ])
      open(self.buildPath(["path2", "tree9", "dir001", "file002", ]), "a").write("whoops")  # change content
      self.failUnlessRaises(ValueError, compareContents, path1, path2)
      self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True)

   def testCompareContents_011(self):
      """
      Compare two directories containing files and directories, different set.
      """
      self.extractTar("tree9", within="path1")
      self.extractTar("tree6", within="path2")
      path1 = self.buildPath(["path1", "tree9", ])
      path2 = self.buildPath(["path2", "tree6", ])
      self.failUnlessRaises(ValueError, compareContents, path1, path2)
      self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True)


#######################################################################
# Suite definition
#######################################################################

def suite():
   """Returns a suite containing all the test cases in this module."""
   return unittest.TestSuite((
      unittest.makeSuite(TestFilesystemList, 'test'),
      unittest.makeSuite(TestBackupFileList, 'test'),
      unittest.makeSuite(TestPurgeItemList, 'test'),
      unittest.makeSuite(TestFunctions, 'test'),
   ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()

CedarBackup2-2.22.0/testcase/knapsacktests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2005,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: knapsacktests.py 1006 2010-07-07 21:03:57Z pronovic $
# Purpose  : Tests knapsack functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/knapsack.py.

Code Coverage
=============

   This module contains individual tests for each of the public functions
   implemented in knapsack.py: C{firstFit()}, C{bestFit()}, C{worstFit()}
   and C{alternateFit()}.

   Note that the tests for each function are nearly identical, so there is
   substantial code duplication.  In production code, I would argue that
   this implies some refactoring is needed.  Here, however, I prefer having
   lots of individual test cases even if there is duplication, because I
   think this makes it easier to judge the extent of a problem when one
   exists.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need to use a KNAPSACKTESTS_FULL
   environment variable to provide a "reduced feature set" test suite as
   for some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# Import standard modules
import unittest

from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit


#######################################################################
# Module-wide configuration and constants
#######################################################################

# These all have random letters for keys because the original data had a, b,
# c, d, etc. in ascending order, which actually masked a sorting bug in the
# implementation.

ITEMS_01 = { }
ITEMS_02 = { "z" : 0, "^" : 0, "3" : 0, "(" : 0, "[" : 0, "/" : 0, "a" : 0, "r" : 0, }
ITEMS_03 = { "k" : 0, "*" : 1, "u" : 10, "$" : 100, "h" : 1000, "?"
 : 10000, "b" : 100000, "s" : 1000000, }
ITEMS_04 = { "l" : 1000000, "G" : 100000, "h" : 10000, "#" : 1000, "a" : 100, "'" : 10, "c" : 1, "t" : 0, }
ITEMS_05 = { "n" : 1, "N" : 1, "z" : 1, "@" : 1, "c" : 1, "h" : 1, "d" : 1, "u" : 1, }
ITEMS_06 = { "o" : 10, "b" : 10, "G" : 10, "+" : 10, "B" : 10, "O" : 10, "e" : 10, "v" : 10, }
ITEMS_07 = { "$" : 100, "K" : 100, "f" : 100, "=" : 100, "n" : 100, "I" : 100, "F" : 100, "w" : 100, }
ITEMS_08 = { "y" : 1000, "C" : 1000, "s" : 1000, "f" : 1000, "a" : 1000, "U" : 1000, "g" : 1000, "x" : 1000, }
ITEMS_09 = { "7" : 10000, "d" : 10000, "f" : 10000, "g" : 10000, "t" : 10000, "l" : 10000, "h" : 10000, "y" : 10000, }
ITEMS_10 = { "5" : 100000, "#" : 100000, "l" : 100000, "t" : 100000, "6" : 100000, "T" : 100000, "i" : 100000, "z" : 100000, }
ITEMS_11 = { "t" : 1, "d" : 1, "k" : 100000, "l" : 100000, "7" : 100000, "G" : 100000, "j" : 1, "1" : 1, }
ITEMS_12 = { "a" : 10, "e" : 10, "M" : 100000, "u" : 100000, "y" : 100000, "f" : 100000, "k" : 10, "2" : 10, }
ITEMS_13 = { "n" : 100, "p" : 100, "b" : 100000, "i" : 100000, "$" : 100000, "/" : 100000, "l" : 100, "3" : 100, }
ITEMS_14 = { "b" : 1000, ":" : 1000, "e" : 100000, "O" : 100000, "o" : 100000, "#" : 100000, "m" : 1000, "4" : 1000, }
ITEMS_15 = { "c" : 1, "j" : 1, "e" : 1, "H" : 100000, "n" : 100000, "h" : 1, "N" : 1, "5" : 1, }
ITEMS_16 = { "a" : 10, "M" : 10, "%" : 10, "'" : 100000, "l" : 100000, "?" : 10, "o" : 10, "6" : 10, }
ITEMS_17 = { "h" : 100, "z" : 100, "(" : 100, "?"
 : 100000, "k" : 100000, "|" : 100, "p" : 100, "7" : 100, }
ITEMS_18 = { "[" : 1000, "l" : 1000, "*" : 1000, "/" : 100000, "z" : 100000, "|" : 1000, "q" : 1000, "h" : 1000, }

# This is a more realistic example, taken from tree9.tar.gz
ITEMS_19 = { 'dir001/file001': 243,
             'dir001/file002': 268,
             'dir002/file001': 134,
             'dir002/file002': 74,
             'file001'       : 155,
             'file002'       : 242,
             'link001'       : 0,
             'link002'       : 0, }


#######################################################################
# Utility functions
#######################################################################

def buildItemDict(origDict):
   """
   Creates an item dictionary suitable for passing to a knapsack function.

   The knapsack functions take a dictionary, keyed on item, of (item, size)
   tuples.  This function converts a simple item/size dictionary to a
   knapsack dictionary.  It exists for convenience.

   @param origDict: Dictionary to convert
   @type origDict: Simple dictionary mapping item to size, like C{ITEMS_02}

   @return: Dictionary suitable for passing to a knapsack function.
   """
   itemDict = { }
   for key in origDict.keys():
      itemDict[key] = (key, origDict[key])
   return itemDict


#######################################################################
# Test Case Classes
#######################################################################

#####################
# TestKnapsack class
#####################

class TestKnapsack(unittest.TestCase):

   """Tests for the various knapsack functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ################################
   # Tests for firstFit() function
   ################################

   def testFirstFit_001(self):
      """
      Test firstFit() behavior for an empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 0
      result = firstFit(items, capacity)
      self.failUnlessEqual(([], 0), result)

   def testFirstFit_002(self):
      """
      Test firstFit() behavior for an empty items dictionary, non-zero capacity.
""" items = buildItemDict(ITEMS_01) capacity = 10000 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_003(self): """ Test firstFit() behavior for an non-empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_04) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_13) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_004(self): """ Test firstFit() behavior for non-empty items dictionary with zero-sized items, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_005(self): """ Test firstFit() behavior for items dictionary where only one item fits. """ items = buildItemDict(ITEMS_05) capacity = 1 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = firstFit(items, capacity) self.failUnless(result[1] <= 
capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testFirstFit_006(self): """ Test firstFit() behavior for items dictionary where only 25% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 2 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = firstFit(items, capacity) self.failUnless(result[1] <= 
capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testFirstFit_007(self): """ Test firstFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" 
% (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testFirstFit_008(self): """ Test firstFit() behavior for items dictionary where only 75% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 6 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, 
"%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) def testFirstFit_009(self): """ Test firstFit() behavior for items dictionary where all items individually exceed the capacity. """ items = buildItemDict(ITEMS_06) capacity = 9 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_010(self): """ Test firstFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testFirstFit_011(self): """ Test firstFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testFirstFit_012(self): """ Test firstFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testFirstFit_013(self): """ Test firstFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_014(self): """ Test firstFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_015(self): """ Test firstFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_016(self): """ Test firstFit() behavior for items dictionary where all items fit. 
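
      For example (illustration only): with C{items = buildItemDict(ITEMS_06)}
      (eight items of size 10) and any capacity of at least 80, the result
      should contain all eight items and C{result[1]} should be 80.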
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = firstFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(206000, result[1]) def testFirstFit_017(self): """ Test firstFit() behavior for a more realistic set of items """ items = buildItemDict(ITEMS_19) capacity = 760 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) # Unfortunately, can't test any more than this, since dict keys come out in random order ############################### # Tests for bestFit() function ############################### def testBestFit_001(self): """ Test bestFit() behavior for an empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_002(self): """ Test bestFit() behavior for an empty items dictionary, non-zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 10000 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_003(self): """ Test bestFit() behavior for an non-empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_04) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_13) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_004(self): """ Test bestFit() behavior for non-empty items dictionary with zero-sized items, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_005(self): """ Test bestFit() behavior for items dictionary where only one item fits. 
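
      For example: with C{items = buildItemDict(ITEMS_05)} (eight items of
      size 1) and C{capacity = 1}, only a single item can be packed, so
      C{len(result[0])} is 1 and C{result[1]} is 1.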
""" items = buildItemDict(ITEMS_05) capacity = 1 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testBestFit_006(self): """ Test bestFit() behavior for items dictionary where only 25% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 2 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" 
% (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testBestFit_007(self): """ Test bestFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testBestFit_008(self): """ Test bestFit() behavior for items dictionary where only 75% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 6 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) 
self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) def testBestFit_009(self): """ Test bestFit() behavior for items dictionary where all items individually exceed the capacity. 
""" items = buildItemDict(ITEMS_06) capacity = 9 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_010(self): """ Test bestFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testBestFit_011(self): """ Test bestFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testBestFit_012(self): """ Test bestFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testBestFit_013(self): """ Test bestFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testBestFit_014(self): """ Test bestFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testBestFit_015(self): """ Test bestFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testBestFit_016(self): """ Test bestFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = bestFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) 
self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(206000, result[1]) def testBestFit_017(self): """ Test bestFit() behavior for a more realistic set of items """ items = buildItemDict(ITEMS_19) capacity = 760 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(5, len(result[0])) self.failUnlessEqual(753, result[1]) self.failUnless('dir001/file001' in result[0]) self.failUnless('dir001/file002' in result[0]) self.failUnless('file002' in result[0]) self.failUnless('link001' in result[0]) self.failUnless('link002' in result[0]) ################################ # Tests for worstFit() function ################################ def testWorstFit_001(self): """ Test worstFit() behavior for an empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 0 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testWorstFit_002(self): """ Test worstFit() behavior for an empty items dictionary, non-zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 10000 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testWorstFit_003(self): """ Test worstFit() behavior for an non-empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_04) capacity = 0 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_13) capacity = 0 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testWorstFit_004(self): """ Test worstFit() behavior for non-empty items dictionary with zero-sized items, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testWorstFit_005(self): """ Test worstFit() behavior for items dictionary where only one item fits. 
""" items = buildItemDict(ITEMS_05) capacity = 1 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testWorstFit_006(self): """ Test worstFit() behavior for items dictionary where only 25% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 2 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, 
"%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testWorstFit_007(self): """ Test worstFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" 
% (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testWorstFit_008(self): """ Test worstFit() behavior for items dictionary where only 75% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 6 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) def testWorstFit_009(self): """ Test worstFit() behavior for items dictionary where all items individually exceed the capacity. 
""" items = buildItemDict(ITEMS_06) capacity = 9 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testWorstFit_010(self): """ Test worstFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testWorstFit_011(self): """ Test worstFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testWorstFit_012(self): """ Test worstFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testWorstFit_013(self): """ Test worstFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testWorstFit_014(self): """ Test worstFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testWorstFit_015(self): """ Test worstFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testWorstFit_016(self): """ Test worstFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = worstFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(206000, result[1]) def testWorstFit_017(self): """ Test worstFit() behavior for a more realistic set of items """ items = buildItemDict(ITEMS_19) capacity = 760 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(605, result[1]) self.failUnless('dir002/file001' in result[0]) self.failUnless('dir002/file002' in result[0]) self.failUnless('file001' in result[0]) self.failUnless('file002' in result[0]) self.failUnless('link001' in result[0]) self.failUnless('link002' in result[0]) #################################### # Tests for alternateFit() function #################################### def testAlternateFit_001(self): """ Test alternateFit() behavior for an empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 0 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) def testAlternateFit_002(self): """ Test alternateFit() behavior for an empty items dictionary, non-zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 10000 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) def testAlternateFit_003(self): """ Test alternateFit() behavior for an non-empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_04) capacity = 0 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_13) capacity = 0 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) def testAlternateFit_004(self): """ Test alternateFit() behavior for non-empty items dictionary with zero-sized items, zero capacity. 
""" items = buildItemDict(ITEMS_03) capacity = 0 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) def testAlternateFit_005(self): """ Test alternateFit() behavior for items dictionary where only one item fits. """ items = buildItemDict(ITEMS_05) capacity = 1 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testAlternateFit_006(self): """ Test alternateFit() behavior for items dictionary where only 25% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 2 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = alternateFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testAlternateFit_007(self): """ Test alternateFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = 
alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testAlternateFit_008(self): """ Test alternateFit() behavior for items dictionary where only 75% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 6 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = alternateFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) def testAlternateFit_009(self): """ Test alternateFit() behavior for items dictionary where all items individually exceed the capacity. """ items = buildItemDict(ITEMS_06) capacity = 9 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) def testAlternateFit_010(self): """ Test alternateFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testAlternateFit_011(self): """ Test alternateFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testAlternateFit_012(self): """ Test alternateFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testAlternateFit_013(self): """ Test alternateFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_014(self): """ Test alternateFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_015(self): """ Test alternateFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_016(self): """ Test alternateFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 
result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = alternateFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(206000, result[1]) def testAlternateFit_017(self): """ Test alternateFit() behavior for a more realistic set of items """ items = buildItemDict(ITEMS_19) capacity = 760 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(719, result[1]) self.failUnless('link001' in result[0]) self.failUnless('dir001/file002' in result[0]) self.failUnless('link002' in result[0]) self.failUnless('dir001/file001' in result[0]) self.failUnless('dir002/file002' in result[0]) self.failUnless('dir002/file001' in result[0]) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestKnapsack, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/testcase/encrypttests.py0000664000175000017500000017563111415165677022533 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python (>= 2.5) # Project : Cedar Backup, release 2 # Revision : $Id: encrypttests.py 1006 2010-07-07 21:03:57Z pronovic $ # Purpose : Tests encrypt extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/encrypt.py. Code Coverage ============= This module contains individual tests for the the public classes implemented in extend/encrypt.py. There are also tests for some of the private functions. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. 
Testing XML Extraction ====================== It's difficult to validated that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. If you want to run all of the tests, set ENCRYPTTESTS_FULL to "Y" in the environment. In this module, the primary dependency is that for some tests, GPG must have access to the public key for "Kenneth J. Pronovici". There is also an assumption that GPG does I{not} have access to a public key for anyone named "Bogus J. User" (so we can test failure scenarios). @author Kenneth J. 
Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest
import os
import tempfile

# Cedar Backup modules
from CedarBackup2.filesystem import FilesystemList
from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar, failUnlessAssignRaises, platformSupportsLinks
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.encrypt import LocalConfig, EncryptConfig
from CedarBackup2.extend.encrypt import _encryptFileWithGpg, _encryptFile, _encryptDailyDir


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "encrypt.conf.1", "encrypt.conf.2", "tree1.tar.gz", "tree2.tar.gz",
              "tree8.tar.gz", "tree15.tar.gz", "tree16.tar.gz", "tree17.tar.gz",
              "tree18.tar.gz", "tree19.tar.gz", "tree20.tar.gz", ]

VALID_GPG_RECIPIENT = "Kenneth J. Pronovici"
INVALID_GPG_RECIPIENT = "Bogus J. User"

INVALID_PATH = "bogus"  # This path name should never exist


#######################################################################
# Utility functions
#######################################################################

def runAllTests():
   """Returns true/false depending on whether the full test suite should be run."""
   if "ENCRYPTTESTS_FULL" in os.environ:
      return os.environ["ENCRYPTTESTS_FULL"] == "Y"
   else:
      return False


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestEncryptConfig class
##########################

class TestEncryptConfig(unittest.TestCase):

   """Tests for the EncryptConfig class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = EncryptConfig()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
      """
      encrypt = EncryptConfig()
      self.failUnlessEqual(None, encrypt.encryptMode)
      self.failUnlessEqual(None, encrypt.encryptTarget)

   def testConstructor_002(self):
      """
      Test constructor with all values filled in, with valid values.
      """
      encrypt = EncryptConfig("gpg", "Backup User")
      self.failUnlessEqual("gpg", encrypt.encryptMode)
      self.failUnlessEqual("Backup User", encrypt.encryptTarget)

   def testConstructor_003(self):
      """
      Test assignment of encryptMode attribute, None value.
      """
      encrypt = EncryptConfig(encryptMode="gpg")
      self.failUnlessEqual("gpg", encrypt.encryptMode)
      encrypt.encryptMode = None
      self.failUnlessEqual(None, encrypt.encryptMode)

   def testConstructor_004(self):
      """
      Test assignment of encryptMode attribute, valid value.
      """
      encrypt = EncryptConfig()
      self.failUnlessEqual(None, encrypt.encryptMode)
      encrypt.encryptMode = "gpg"
      self.failUnlessEqual("gpg", encrypt.encryptMode)

   def testConstructor_005(self):
      """
      Test assignment of encryptMode attribute, invalid value (empty).
      """
      encrypt = EncryptConfig()
      self.failUnlessEqual(None, encrypt.encryptMode)
      self.failUnlessAssignRaises(ValueError, encrypt, "encryptMode", "")
      self.failUnlessEqual(None, encrypt.encryptMode)

   def testConstructor_006(self):
      """
      Test assignment of encryptTarget attribute, None value.
      """
      encrypt = EncryptConfig(encryptTarget="Backup User")
      self.failUnlessEqual("Backup User", encrypt.encryptTarget)
      encrypt.encryptTarget = None
      self.failUnlessEqual(None, encrypt.encryptTarget)

   def testConstructor_007(self):
      """
      Test assignment of encryptTarget attribute, valid value.
      """
      encrypt = EncryptConfig()
      self.failUnlessEqual(None, encrypt.encryptTarget)
      encrypt.encryptTarget = "Backup User"
      self.failUnlessEqual("Backup User", encrypt.encryptTarget)

   def testConstructor_008(self):
      """
      Test assignment of encryptTarget attribute, invalid value (empty).
      """
      encrypt = EncryptConfig()
      self.failUnlessEqual(None, encrypt.encryptTarget)
      self.failUnlessAssignRaises(ValueError, encrypt, "encryptTarget", "")
      self.failUnlessEqual(None, encrypt.encryptTarget)

   ############################
   # Test comparison operators
   ############################

   def testComparison_001(self):
      """
      Test comparison of two identical objects, all attributes None.
      """
      encrypt1 = EncryptConfig()
      encrypt2 = EncryptConfig()
      self.failUnlessEqual(encrypt1, encrypt2)
      self.failUnless(encrypt1 == encrypt2)
      self.failUnless(not encrypt1 < encrypt2)
      self.failUnless(encrypt1 <= encrypt2)
      self.failUnless(not encrypt1 > encrypt2)
      self.failUnless(encrypt1 >= encrypt2)
      self.failUnless(not encrypt1 != encrypt2)

   def testComparison_002(self):
      """
      Test comparison of two identical objects, all attributes non-None.
      """
      encrypt1 = EncryptConfig("gpg", "Backup User")
      encrypt2 = EncryptConfig("gpg", "Backup User")
      self.failUnlessEqual(encrypt1, encrypt2)
      self.failUnless(encrypt1 == encrypt2)
      self.failUnless(not encrypt1 < encrypt2)
      self.failUnless(encrypt1 <= encrypt2)
      self.failUnless(not encrypt1 > encrypt2)
      self.failUnless(encrypt1 >= encrypt2)
      self.failUnless(not encrypt1 != encrypt2)

   def testComparison_003(self):
      """
      Test comparison of two differing objects, encryptMode differs (one None).
      """
      encrypt1 = EncryptConfig()
      encrypt2 = EncryptConfig(encryptMode="gpg")
      self.failIfEqual(encrypt1, encrypt2)
      self.failUnless(not encrypt1 == encrypt2)
      self.failUnless(encrypt1 < encrypt2)
      self.failUnless(encrypt1 <= encrypt2)
      self.failUnless(not encrypt1 > encrypt2)
      self.failUnless(not encrypt1 >= encrypt2)
      self.failUnless(encrypt1 != encrypt2)

   # Note: no test to check when encrypt mode differs, since only one value is allowed

   def testComparison_004(self):
      """
      Test comparison of two differing objects, encryptTarget differs (one None).
      """
      encrypt1 = EncryptConfig()
      encrypt2 = EncryptConfig(encryptTarget="Backup User")
      self.failIfEqual(encrypt1, encrypt2)
      self.failUnless(not encrypt1 == encrypt2)
      self.failUnless(encrypt1 < encrypt2)
      self.failUnless(encrypt1 <= encrypt2)
      self.failUnless(not encrypt1 > encrypt2)
      self.failUnless(not encrypt1 >= encrypt2)
      self.failUnless(encrypt1 != encrypt2)

   def testComparison_005(self):
      """
      Test comparison of two differing objects, encryptTarget differs.
      """
      encrypt1 = EncryptConfig("gpg", "Another User")
      encrypt2 = EncryptConfig("gpg", "Backup User")
      self.failIfEqual(encrypt1, encrypt2)
      self.failUnless(not encrypt1 == encrypt2)
      self.failUnless(encrypt1 < encrypt2)
      self.failUnless(encrypt1 <= encrypt2)
      self.failUnless(not encrypt1 > encrypt2)
      self.failUnless(not encrypt1 >= encrypt2)
      self.failUnless(encrypt1 != encrypt2)


########################
# TestLocalConfig class
########################

class TestLocalConfig(unittest.TestCase):

   """Tests for the LocalConfig class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      pass

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   def validateAddConfig(self, origConfig):
      """
      Validates that document dumped from C{LocalConfig.addConfig} results in
      identical object.

      We dump a document containing just the encrypt configuration, and then
      make sure that if we push that document back into the C{LocalConfig}
      object, that the resulting object matches the original.

      The C{self.failUnlessEqual} method is used for the validation, so if the
      method call returns normally, everything is OK.

      @param origConfig: Original configuration.
      """
      (xmlDom, parentNode) = createOutputDom()
      origConfig.addConfig(xmlDom, parentNode)
      xmlData = serializeDom(xmlDom)
      newConfig = LocalConfig(xmlData=xmlData, validate=False)
      self.failUnlessEqual(origConfig, newConfig)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = LocalConfig()
      obj.__repr__()
      obj.__str__()

   #####################################################
   # Test basic constructor and attribute functionality
   #####################################################

   def testConstructor_001(self):
      """
      Test empty constructor, validate=False.
      """
      config = LocalConfig(validate=False)
      self.failUnlessEqual(None, config.encrypt)

   def testConstructor_002(self):
      """
      Test empty constructor, validate=True.
      """
      config = LocalConfig(validate=True)
      self.failUnlessEqual(None, config.encrypt)

   def testConstructor_003(self):
      """
      Test with empty config document as both data and file, validate=False.
      """
      path = self.resources["encrypt.conf.1"]
      contents = open(path).read()
      self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False)

   def testConstructor_004(self):
      """
      Test assignment of encrypt attribute, None value.
      """
      config = LocalConfig()
      config.encrypt = None
      self.failUnlessEqual(None, config.encrypt)

   def testConstructor_005(self):
      """
      Test assignment of encrypt attribute, valid value.
      """
      config = LocalConfig()
      config.encrypt = EncryptConfig()
      self.failUnlessEqual(EncryptConfig(), config.encrypt)

   def testConstructor_006(self):
      """
      Test assignment of encrypt attribute, invalid value (not EncryptConfig).
      """
      config = LocalConfig()
      self.failUnlessAssignRaises(ValueError, config, "encrypt", "STRING!")

   ############################
   # Test comparison operators
   ############################

   def testComparison_001(self):
      """
      Test comparison of two identical objects, all attributes None.
      """
      config1 = LocalConfig()
      config2 = LocalConfig()
      self.failUnlessEqual(config1, config2)
      self.failUnless(config1 == config2)
      self.failUnless(not config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(config1 >= config2)
      self.failUnless(not config1 != config2)

   def testComparison_002(self):
      """
      Test comparison of two identical objects, all attributes non-None.
      """
      config1 = LocalConfig()
      config1.encrypt = EncryptConfig()
      config2 = LocalConfig()
      config2.encrypt = EncryptConfig()
      self.failUnlessEqual(config1, config2)
      self.failUnless(config1 == config2)
      self.failUnless(not config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(config1 >= config2)
      self.failUnless(not config1 != config2)

   def testComparison_003(self):
      """
      Test comparison of two differing objects, encrypt differs (one None).
      """
      config1 = LocalConfig()
      config2 = LocalConfig()
      config2.encrypt = EncryptConfig()
      self.failIfEqual(config1, config2)
      self.failUnless(not config1 == config2)
      self.failUnless(config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(not config1 >= config2)
      self.failUnless(config1 != config2)

   def testComparison_004(self):
      """
      Test comparison of two differing objects, encrypt differs.
      """
      config1 = LocalConfig()
      config1.encrypt = EncryptConfig(encryptTarget="Another User")
      config2 = LocalConfig()
      config2.encrypt = EncryptConfig(encryptTarget="Backup User")
      self.failIfEqual(config1, config2)
      self.failUnless(not config1 == config2)
      self.failUnless(config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(not config1 >= config2)
      self.failUnless(config1 != config2)

   ######################
   # Test validate logic
   ######################

   def testValidate_001(self):
      """
      Test validate on a None encrypt section.
      """
      config = LocalConfig()
      config.encrypt = None
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_002(self):
      """
      Test validate on an empty encrypt section.
      """
      config = LocalConfig()
      config.encrypt = EncryptConfig()
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_003(self):
      """
      Test validate on a non-empty encrypt section with no values filled in.
      """
      config = LocalConfig()
      config.encrypt = EncryptConfig(None, None)
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_004(self):
      """
      Test validate on a non-empty encrypt section with only one value filled in.
      """
      config = LocalConfig()
      config.encrypt = EncryptConfig("gpg", None)
      self.failUnlessRaises(ValueError, config.validate)
      config.encrypt = EncryptConfig(None, "Backup User")
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_005(self):
      """
      Test validate on a non-empty encrypt section with valid values filled in.
      """
      config = LocalConfig()
      config.encrypt = EncryptConfig("gpg", "Backup User")
      config.validate()

   ############################
   # Test parsing of documents
   ############################

   def testParse_001(self):
      """
      Parse empty config document.
      """
      path = self.resources["encrypt.conf.1"]
      contents = open(path).read()
      self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True)
      self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True)
      config = LocalConfig(xmlPath=path, validate=False)
      self.failUnlessEqual(None, config.encrypt)
      config = LocalConfig(xmlData=contents, validate=False)
      self.failUnlessEqual(None, config.encrypt)

   def testParse_002(self):
      """
      Parse config document with filled-in values.
      """
      path = self.resources["encrypt.conf.2"]
      contents = open(path).read()
      config = LocalConfig(xmlPath=path, validate=False)
      self.failIfEqual(None, config.encrypt)
      self.failUnlessEqual("gpg", config.encrypt.encryptMode)
      self.failUnlessEqual("Backup User", config.encrypt.encryptTarget)
      config = LocalConfig(xmlData=contents, validate=False)
      self.failIfEqual(None, config.encrypt)
      self.failUnlessEqual("gpg", config.encrypt.encryptMode)
      self.failUnlessEqual("Backup User", config.encrypt.encryptTarget)

   ###################
   # Test addConfig()
   ###################

   def testAddConfig_001(self):
      """
      Test with empty config document.
      """
      encrypt = EncryptConfig()
      config = LocalConfig()
      config.encrypt = encrypt
      self.validateAddConfig(config)

   def testAddConfig_002(self):
      """
      Test with values set.
      """
      encrypt = EncryptConfig(encryptMode="gpg", encryptTarget="Backup User")
      config = LocalConfig()
      config.encrypt = encrypt
      self.validateAddConfig(config)


######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the functions in encrypt.py."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except: pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   #############################
   # Test _encryptFileWithGpg()
   #############################

   def testEncryptFileWithGpg_001(self):
      """
      Test for a non-existent file in a non-existent directory.
      """
      sourceFile = self.buildPath([INVALID_PATH, INVALID_PATH])
      self.failUnlessRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT)

   def testEncryptFileWithGpg_002(self):
      """
      Test for a non-existent file in an existing directory.
      """
      self.extractTar("tree8")
      sourceFile = self.buildPath(["tree8", "dir001", INVALID_PATH, ])
      self.failUnlessRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT)

   def testEncryptFileWithGpg_003(self):
      """
      Test for an unknown recipient.
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", "file001" ])
      expectedFile = self.buildPath(["tree1", "file001.gpg" ])
      self.failUnlessRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT)
      self.failIf(os.path.exists(expectedFile))
      self.failUnless(os.path.exists(sourceFile))

   def testEncryptFileWithGpg_004(self):
      """
      Test for a valid recipient.
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", "file001" ])
      expectedFile = self.buildPath(["tree1", "file001.gpg" ])
      actualFile = _encryptFileWithGpg(sourceFile, VALID_GPG_RECIPIENT)
      self.failUnlessEqual(actualFile, expectedFile)
      self.failUnless(os.path.exists(sourceFile))
      self.failUnless(os.path.exists(actualFile))

   ######################
   # Test _encryptFile()
   ######################

   def testEncryptFile_001(self):
      """
      Test for a mode other than "gpg".
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", "file001" ])
      expectedFile = self.buildPath(["tree1", "file001.gpg" ])
      self.failUnlessRaises(ValueError, _encryptFile, sourceFile, "pgp", INVALID_GPG_RECIPIENT, None, None, removeSource=False)
      self.failUnless(os.path.exists(sourceFile))
      self.failIf(os.path.exists(expectedFile))

   def testEncryptFile_002(self):
      """
      Test for a source path that does not exist.
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", INVALID_PATH ])
      expectedFile = self.buildPath(["tree1", "%s.gpg" % INVALID_PATH ])
      self.failUnlessRaises(ValueError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=False)
      self.failIf(os.path.exists(sourceFile))
      self.failIf(os.path.exists(expectedFile))

   def testEncryptFile_003(self):
      """
      Test "gpg" mode with a valid source path and invalid recipient, removeSource=False.
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", "file001" ])
      expectedFile = self.buildPath(["tree1", "file001.gpg" ])
      self.failUnlessRaises(IOError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=False)
      self.failUnless(os.path.exists(sourceFile))
      self.failIf(os.path.exists(expectedFile))

   def testEncryptFile_004(self):
      """
      Test "gpg" mode with a valid source path and invalid recipient, removeSource=True.
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", "file001" ])
      expectedFile = self.buildPath(["tree1", "file001.gpg" ])
      self.failUnlessRaises(IOError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=True)
      self.failUnless(os.path.exists(sourceFile))
      self.failIf(os.path.exists(expectedFile))

   def testEncryptFile_005(self):
      """
      Test "gpg" mode with a valid source path and recipient, removeSource=False.
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", "file001" ])
      expectedFile = self.buildPath(["tree1", "file001.gpg" ])
      actualFile = _encryptFile(sourceFile, "gpg", VALID_GPG_RECIPIENT, None, None, removeSource=False)
      self.failUnlessEqual(actualFile, expectedFile)
      self.failUnless(os.path.exists(sourceFile))
      self.failUnless(os.path.exists(actualFile))

   def testEncryptFile_006(self):
      """
      Test "gpg" mode with a valid source path and recipient, removeSource=True.
      """
      self.extractTar("tree1")
      sourceFile = self.buildPath(["tree1", "file001" ])
      expectedFile = self.buildPath(["tree1", "file001.gpg" ])
      actualFile = _encryptFile(sourceFile, "gpg", VALID_GPG_RECIPIENT, None, None, removeSource=True)
      self.failUnlessEqual(actualFile, expectedFile)
      self.failIf(os.path.exists(sourceFile))
      self.failUnless(os.path.exists(actualFile))

   ##########################
   # Test _encryptDailyDir()
   ##########################

   def testEncryptDailyDir_001(self):
      """
      Test with a nonexistent daily staging directory.
      """
      self.extractTar("tree1")
      dailyDir = self.buildPath(["tree1", "dir001" ])
      self.failUnlessRaises(ValueError, _encryptDailyDir, dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None)

   def testEncryptDailyDir_002(self):
      """
      Test with a valid staging directory containing only links.
      """
      if platformSupportsLinks():
         self.extractTar("tree15")
         dailyDir = self.buildPath(["tree15", "dir001" ])
         fsList = FilesystemList()
         fsList.addDirContents(dailyDir)
         self.failUnlessEqual(3, len(fsList))
         self.failUnless(self.buildPath(["tree15", "dir001", ]) in fsList)
         self.failUnless(self.buildPath(["tree15", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree15", "dir001", "link002", ]) in fsList)
         _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None)
         fsList = FilesystemList()
         fsList.addDirContents(dailyDir)
         self.failUnlessEqual(3, len(fsList))
         self.failUnless(self.buildPath(["tree15", "dir001", ]) in fsList)
         self.failUnless(self.buildPath(["tree15", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath(["tree15", "dir001", "link002", ]) in fsList)

   def testEncryptDailyDir_003(self):
      """
      Test with a valid staging directory containing only directories.
""" self.extractTar("tree2") dailyDir = self.buildPath(["tree2"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath(["tree2", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir010", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath(["tree2", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir010", ]) in fsList) def testEncryptDailyDir_004(self): """ Test with a valid staging directory containing only files. 
""" self.extractTar("tree1") dailyDir = self.buildPath(["tree1"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree1" ]) in fsList) self.failUnless(self.buildPath(["tree1", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file007", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree1" ]) in fsList) self.failUnless(self.buildPath(["tree1", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file007.gpg", ]) in fsList) def testEncryptDailyDir_005(self): """ Test with a valid staging directory containing files, directories and links, including various files that match the general Cedar Backup indicator file pattern ("cback."). 
""" self.extractTar("tree16") dailyDir = self.buildPath(["tree16"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) if platformSupportsLinks(): self.failUnlessEqual(122, len(fsList)) self.failUnless(self.buildPath(["tree16", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", 
"file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "link002", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file004", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.stage", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.store", ]) in fsList) else: self.failUnlessEqual(102, len(fsList)) self.failUnless(self.buildPath(["tree16", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file004", 
]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", 
"dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file005", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file005", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.stage", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.store", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) # since all links are to files, and the files all changed names, the links are invalid and disappear self.failUnlessEqual(102, len(fsList)) self.failUnless(self.buildPath(["tree16", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file002.gpg", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file007.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file008.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", 
]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file007.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file008.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", 
"dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file007.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file008.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file004.gpg", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file007.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file008.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", 
"dir004", "dir005", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file007.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file008.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.stage", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.store", ]) in fsList) ####################################################################### # Suite definition ####################################################################### def suite(): """Returns a suite containing all the test cases in this module.""" if runAllTests(): return unittest.TestSuite(( unittest.makeSuite(TestEncryptConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), unittest.makeSuite(TestFunctions, 'test'), )) else: return unittest.TestSuite(( unittest.makeSuite(TestEncryptConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.22.0/doc/0002775000175000017500000000000012143054371016314 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/doc/docbook.txt0000664000175000017500000000426411163707057020510 0ustar pronovicpronovic00000000000000The Cedar Backup Software Manual, found in manual/src, is written in DocBook Lite. All of the docbook functionality used to build the actual documentation that I distribute is based around a Debian system (or a system with equivalent functionality) as the development system. 
I built the entire docbook infrastructure based on the Subversion book:

   http://svnbook.red-bean.com
   http://svn.collab.net/repos/svn/branches/1.0.x/doc/book/

Some other links that might be useful to you:

   http://www.sagehill.net/docbookxsl/index.html
   http://tldp.org/HOWTO/DocBook-Demystification-HOWTO/index.html
   http://www.vim.org/scripts/script.php?script_id=301

This is the official DocBook XSL documentation:

   http://wiki.docbook.org/topic/
   http://wiki.docbook.org/topic/DocBookDocumentation
   http://wiki.docbook.org/topic/DocBookXslStylesheetDocs
   http://docbook.sourceforge.net/release/xsl/current/doc/fo/

This official documentation is where you want to look for stylesheet
options, etc.  For instance, these are the docs I used when I wanted to
figure out how to put items on new pages in PDF output.

The following items need to be installed to build the user manual:

   apt-get install docbook-xsl
   apt-get install xsltproc
   apt-get install fop
   apt-get install sp   # for nsgmls

Then, to make images work from within PDF, you need to get the Jimi image
library:

   get jimi1_0.tar.Z from http://java.sun.com/products/jimi/
   tar -Zxvf jimi1_0.tar.Z
   cp Jimi/examples/AppletDemo/JimiProClasses.jar /usr/share/java/jimi-1.0.jar

You also need a working XML catalog on your system, because the various
DTDs and stylesheets depend on that.  There's no point in hardcoding paths
and keeping local copies of things if the catalog can do that for you.
However, if you don't have a catalog, you can probably force things to
work.  See notes at the top of the various files in util/docbook.

The util/validate script is a thin wrapper around the nsgmls validating
parser.  I took the syntax directly from the Subversion book documentation:

   http://svn.collab.net/repos/svn/branches/1.0.x/doc/book/README

You should run 'make validate' against the manual before checking it in.
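The 'make validate' step above wraps nsgmls, which is not always installed. As a rough stand-in, a plain well-formedness check catches the most common markup mistakes before running the real DTD validation. This is only a sketch using the Python standard library (it is not part of the Cedar Backup build, and unlike nsgmls it does not validate against the DTD):

```python
import xml.etree.ElementTree as ET

def well_formed(xml_text):
    """Cheap well-formedness pre-check; real DTD validation is still
    done by nsgmls via 'make validate'."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(well_formed("<book><chapter/></book>"))  # True
print(well_formed("<book><chapter></book>"))   # False: mismatched tag
```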
CedarBackup2-2.22.0/doc/osx/0002775000175000017500000000000012143054372017126 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/doc/osx/stop-automount0000775000175000017500000000070011163707057022073 0ustar pronovicpronovic00000000000000#!/bin/sh
# Script to stop the Mac OS X auto mount daemon so we can use cdrtools.
# Swiped from online documentation related to X-CD-Roast and reformatted.
# Note: this daemon was apparently called autodiskmount in OS X 10.3 and prior.
sudo kill -STOP `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' `
echo "Auto mount process ID `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' ` stopped."
CedarBackup2-2.22.0/doc/osx/notes.txt0000664000175000017500000000324111163707057021023 0ustar pronovicpronovic00000000000000Mac OS X notes

Tested with my new (August 2005) iBook G4 running 10.4 (Tiger).

   1. collect works fine
   2. stage works fine
   3. purge works fine
   4. store has some issues - the code all works, but you end up really
      having to fight the OS before it is allowed to work:

      a. the drive identifies itself as having a tray, but doesn't
      b. the Fink eject program doesn't really work (it hangs)
      c. OS X insists on having control of every disc via the Finder

Users will have to put in a dummy override for eject, maybe to /bin/echo
or something, for the write to succeed.  Either that, or I'll have to put
in some option to override the eject identification for the drive (ugh!,
though maybe eventually other people will need this, too?)

Users will need to run a script to stop/start the automount daemon before
running cback.  However, beware!  If you stop this daemon, the soft eject
button apparently stops working!

It gets worse - you can't mount the disk to do a consistency check (even
using hdiutil) when the automount daemon is stopped.  The utility just
doesn't respond.
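The "dummy override for eject" mentioned above would be expressed through the command-override mechanism in cback.conf. The fragment below is only a sketch based on the <override> syntax visible in doc/cback.conf.sample; verify the element names and placement against the configuration chapter of the user manual before relying on it:

```xml
<options>
   <!-- Hypothetical sketch: point the eject command at a no-op so the
        store action can complete on Mac OS X.  Element names assumed
        from doc/cback.conf.sample; confirm against the manual. -->
   <override>
      <command>eject</command>
      <abs_path>/bin/echo</abs_path>
   </override>
</options>
```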
I think that basically, we're going to have to recommend against using the
store command on Mac OS X unless someone with more expertise can help out
with this.  The OS just gets too much in the way.  At the least, we need
to document this stuff and put in some code warnings.

Might want to reference the X-CD-Roast notes:

   http://www.xcdroast.org/xcdr098/xcdrosX.html

The file README.macosX from the cdrtools distribution also contains some
useful information that we might be able to incorporate into the manual at
some point.
CedarBackup2-2.22.0/doc/osx/start-automount0000775000175000017500000000072011163707057022245 0ustar pronovicpronovic00000000000000#!/bin/sh
# Script to restart the Mac OS X auto mount daemon once we're done using cdrtools.
# Swiped from online documentation related to X-CD-Roast and reformatted.
# Note: this daemon was apparently called autodiskmount in OS X 10.3 and prior.
sudo kill -CONT `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' `
echo "Auto mount process ID `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' ` restarted."
CedarBackup2-2.22.0/doc/cback.conf.sample0000664000175000017500000001100011163707057021503 0ustar pronovicpronovic00000000000000 Kenneth J.
Pronovici 1.3 Sample sysinfo CedarBackup2.extend.sysinfo executeAction 95 mysql CedarBackup2.extend.mysql executeAction 96 postgresql CedarBackup2.extend.postgresql executeAction 97 subversion CedarBackup2.extend.subversion executeAction 98 mbox CedarBackup2.extend.mbox executeAction 99 encrypt CedarBackup2.extend.encrypt executeAction 299 tuesday /opt/backup/tmp backup group /usr/bin/scp -B cdrecord /opt/local/bin/cdrecord mkisofs /opt/local/bin/mkisofs collect echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT" collect echo "I AM A POST-ACTION HOOK RELATED TO COLLECT" /opt/backup/collect daily targz .cbignore /etc incr /home/root/.profile weekly /opt/backup/stage debian local /opt/backup/collect /opt/backup/stage cdrw-74 cdwriter /dev/cdrw 0,0,0 4 Y N weekly 5.1 /opt/backup/stage 7 /opt/backup/collect 0 mlogin bzip2 Y plogin bzip2 N db1 db2 incr bzip2 FSFS /opt/svn/repo1 BDB /opt/svn/repo2 incr bzip2 /home/user1/mail/greylist daily /home/user2/mail gzip gpg Backup User CedarBackup2-2.22.0/doc/interface/0002775000175000017500000000000012143054372020255 5ustar pronovicpronovic00000000000000CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.Diagnostics-class.html0000664000175000017500000007727512143054363027736 0ustar pronovicpronovic00000000000000 CedarBackup2.util.Diagnostics
    Package CedarBackup2 :: Module util :: Class Diagnostics
    [hide private]
    [frames] | no frames]

    Class Diagnostics

    source code

    object --+
             |
            Diagnostics
    

    Class holding runtime diagnostic information.

    Diagnostic information is information that is useful to get from users for debugging purposes. I'm consolidating it all here into one object.

    Instance Methods [hide private]
     
    __init__(self)
    Constructor for the Diagnostics class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    getValues(self)
    Get a map containing all of the diagnostic values.
    source code
     
    printDiagnostics(self, fd=sys.stdout, prefix='')
    Pretty-print diagnostic information to a file descriptor.
    source code
     
    logDiagnostics(self, method, prefix='')
    Pretty-print diagnostic information using a logger method.
    source code
     
    _buildDiagnosticLines(self, prefix='')
    Build a set of pretty-printed diagnostic lines.
    source code
     
    _getVersion(self)
    Property target to get the Cedar Backup version.
    source code
     
    _getInterpreter(self)
    Property target to get the Python interpreter version.
    source code
     
    _getEncoding(self)
    Property target to get the filesystem encoding.
    source code
     
    _getPlatform(self)
    Property target to get the operating system platform.
    source code
     
    _getLocale(self)
    Property target to get the default locale that is in effect.
    source code
     
    _getTimestamp(self)
    Property target to get a current date/time stamp.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods [hide private]
     
    _getMaxLength(values)
    Get the maximum length from among a list of strings.
    source code
    Properties [hide private]
      version
    Cedar Backup version.
      interpreter
    Python interpreter version.
      platform
    Platform identifying information.
      encoding
    Filesystem encoding that is in effect.
      locale
    Locale that is in effect.
      timestamp
    Current timestamp.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self)
    (Constructor)

    source code 

    Constructor for the Diagnostics class.

    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    getValues(self)

    source code 

    Get a map containing all of the diagnostic values.

    Returns:
    Map from diagnostic name to diagnostic value.

    printDiagnostics(self, fd=sys.stdout, prefix='')

    source code 

    Pretty-print diagnostic information to a file descriptor.

    Parameters:
    • fd - File descriptor used to print information.
    • prefix - Prefix string (if any) to place onto printed lines

    Note: The fd is used rather than print to facilitate unit testing.

    logDiagnostics(self, method, prefix='')

    source code 

    Pretty-print diagnostic information using a logger method.

    Parameters:
• method - Logger method to use for logging (e.g. logger.info)
    • prefix - Prefix string (if any) to place onto printed lines

    _buildDiagnosticLines(self, prefix='')

    source code 

    Build a set of pretty-printed diagnostic lines.

    Parameters:
    • prefix - Prefix string (if any) to place onto printed lines
    Returns:
    List of strings, not terminated by newlines.

    Property Details [hide private]

    version

    Cedar Backup version.

    Get Method:
    _getVersion(self) - Property target to get the Cedar Backup version.

    interpreter

    Python interpreter version.

    Get Method:
    _getInterpreter(self) - Property target to get the Python interpreter version.

    platform

    Platform identifying information.

    Get Method:
    _getPlatform(self) - Property target to get the operating system platform.

    encoding

    Filesystem encoding that is in effect.

    Get Method:
    _getEncoding(self) - Property target to get the filesystem encoding.

    locale

    Locale that is in effect.

    Get Method:
    _getLocale(self) - Property target to get the default locale that is in effect.

    timestamp

    Current timestamp.

    Get Method:
    _getTimestamp(self) - Property target to get a current date/time stamp.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.release-module.html0000664000175000017500000001754512143054362026304 0ustar pronovicpronovic00000000000000 CedarBackup2.release
    Package CedarBackup2 :: Module release
    [hide private]
    [frames] | no frames]

    Module release

    source code

    Provides location to maintain version information.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables [hide private]
      AUTHOR = 'Kenneth J. Pronovici'
    Author of software.
      EMAIL = 'pronovic@ieee.org'
    Email address of author.
      COPYRIGHT = '2004-2011,2013'
    Copyright date.
      VERSION = '2.22.0'
    Software version.
      DATE = '09 May 2013'
    Software release date.
      URL = 'http://cedar-backup.sourceforge.net/'
    URL of Cedar Backup webpage.
      __package__ = None
    hash(x)
    CedarBackup2-2.22.0/doc/interface/CedarBackup2.tools-pysrc.html0000664000175000017500000002541612143054365025676 0ustar pronovicpronovic00000000000000 CedarBackup2.tools
    Package CedarBackup2 :: Package tools
    [hide private]
    [frames] | no frames]

    Source Code for Package CedarBackup2.tools

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Official Cedar Backup Tools 
    14  # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ 
    15  # Purpose  : Provides package initialization 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  ######################################################################## 
    20  # Module documentation 
    21  ######################################################################## 
    22   
    23  """ 
    24  Official Cedar Backup Tools 
    25   
    26  This package provides official Cedar Backup tools.  Tools are things that feel 
    27  a little like extensions, but don't fit the normal mold of extensions.  For 
    28  instance, they might not be intended to run from cron, or might need to interact 
    29  dynamically with the user (i.e. accept user input). 
    30   
    31  Tools are usually scripts that are run directly from the command line, just 
    32  like the main C{cback} script.  Like the C{cback} script, the majority of a 
    33  tool is implemented in a .py module, and then the script just invokes the 
    34  module's C{cli()} function.  The actual scripts for tools are distributed in 
    35  the util/ directory. 
    36   
    37  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    38  """ 
    39   
    40   
    41  ######################################################################## 
    42  # Package initialization 
    43  ######################################################################## 
    44   
    45  # Using 'from CedarBackup2.tools import *' will just import the modules listed 
    46  # in the __all__ variable. 
    47   
    48  __all__ = [ 'span', ] 
    49   
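The docstring above describes the pattern each tool follows: the logic lives in a module exposing a C{cli()} entry point, and the script shipped in util/ is only a trampoline. A minimal stdlib-only sketch of that shape (the option name below is illustrative, not part of Cedar Backup):

```python
import sys

# Hypothetical stand-in for a tool module such as CedarBackup2.tools.span:
# the module owns all of the logic and exposes a single cli() entry point
# that returns an exit status.
def cli(argv=None):
   if argv is None:
      argv = sys.argv[1:]
   if "--bad-option" in argv:   # illustrative usage error
      return 2
   return 0

# The corresponding script distributed in util/ would then be no more than:
#
#    from CedarBackup2.tools.span import cli
#    sys.exit(cli())
```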
    

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writer-pysrc.html
    Package CedarBackup2 :: Module writer

    Source Code for Module CedarBackup2.writer

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Cedar Backup, release 2 
    14  # Revision : $Id: writer.py 1022 2011-10-11 23:27:49Z pronovic $ 
    15  # Purpose  : Provides interface backwards compatibility. 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  ######################################################################## 
    20  # Module documentation 
    21  ######################################################################## 
    22   
    23  """ 
    24  Provides interface backwards compatibility. 
    25   
    26  In Cedar Backup 2.10.0, a refactoring effort took place while adding code to 
    27  support DVD hardware.  All of the writer functionality was moved to the 
    28  writers/ package.  This mostly-empty file remains to preserve the Cedar Backup 
    29  library interface. 
    30   
    31  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    32  """ 
    33   
    34  ######################################################################## 
    35  # Imported modules 
    36  ######################################################################## 
    37   
    38  # pylint: disable=W0611 
    39  from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed 
    40  from CedarBackup2.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter 
    41  from CedarBackup2.writers.cdwriter import MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 
    42   
    

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.mysql-module.html

    Module mysql


    Classes

    LocalConfig
    MysqlConfig

    Functions

    backupDatabase
    executeAction

    Variables

    MYSQLDUMP_COMMAND
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.split-pysrc.html
    Package CedarBackup2 :: Package extend :: Module split

    Source Code for Module CedarBackup2.extend.split

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007,2010,2013 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Revision : $Id: split.py 1028 2013-03-21 14:33:51Z pronovic $ 
     31  # Purpose  : Provides an extension to split up large files in staging directories. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides an extension to split up large files in staging directories. 
     41   
     42  When this extension is executed, it will look through the configured Cedar 
     43  Backup staging directory for files exceeding a specified size limit, and split 
     44  them down into smaller files using the 'split' utility.  Any directory which 
     45  has already been split (as indicated by the C{cback.split} file) will be 
     46  ignored. 
     47   
     48  This extension requires a new configuration section <split> and is intended 
     49  to be run immediately after the standard stage action or immediately before the 
     50  standard store action.  Aside from its own configuration, it requires the 
     51  options and staging configuration sections in the standard Cedar Backup 
     52  configuration file. 
     53   
     54  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     55  """ 
     56   
     57  ######################################################################## 
     58  # Imported modules 
     59  ######################################################################## 
     60   
     61  # System modules 
     62  import os 
     63  import re 
     64  import logging 
     65   
     66  # Cedar Backup modules 
     67  from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership 
     68  from CedarBackup2.xmlutil import createInputDom, addContainerNode 
     69  from CedarBackup2.xmlutil import readFirstChild 
     70  from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles 
     71  from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode 
     72   
     73   
     74  ######################################################################## 
     75  # Module-wide constants and variables 
     76  ######################################################################## 
     77   
     78  logger = logging.getLogger("CedarBackup2.log.extend.split") 
     79   
     80  SPLIT_COMMAND = [ "split", ] 
     81  SPLIT_INDICATOR = "cback.split" 
    
      82   
      83   
      84  ######################################################################## 
      85  # SplitConfig class definition 
      86  ######################################################################## 
      87   
      88  class SplitConfig(object): 
      89   
      90     """ 
      91     Class representing split configuration. 
      92   
      93     Split configuration is used for splitting staging directories. 
      94   
      95     The following restrictions exist on data in this class: 
      96   
      97        - The size limit must be a ByteQuantity 
      98        - The split size must be a ByteQuantity 
      99   
     100     @sort: __init__, __repr__, __str__, __cmp__, sizeLimit, splitSize 
     101     """ 
     102   
     103     def __init__(self, sizeLimit=None, splitSize=None): 
     104        """ 
     105        Constructor for the C{SplitConfig} class. 
     106   
     107        @param sizeLimit: Size limit of the files, in bytes 
     108        @param splitSize: Size that files exceeding the limit will be split into, in bytes 
     109   
     110        @raise ValueError: If one of the values is invalid. 
     111        """ 
     112        self._sizeLimit = None 
     113        self._splitSize = None 
     114        self.sizeLimit = sizeLimit 
     115        self.splitSize = splitSize 
     116   
     117     def __repr__(self): 
     118        """ 
     119        Official string representation for class instance. 
     120        """ 
     121        return "SplitConfig(%s, %s)" % (self.sizeLimit, self.splitSize) 
     122   
     123     def __str__(self): 
     124        """ 
     125        Informal string representation for class instance. 
     126        """ 
     127        return self.__repr__() 
     128   
     129     def __cmp__(self, other): 
     130        """ 
     131        Definition of equals operator for this class. 
     132        Lists within this class are "unordered" for equality comparisons. 
     133        @param other: Other object to compare to. 
     134        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
     135        """ 
     136        if other is None: 
     137           return 1 
     138        if self.sizeLimit != other.sizeLimit: 
     139           if self.sizeLimit < other.sizeLimit: 
     140              return -1 
     141           else: 
     142              return 1 
     143        if self.splitSize != other.splitSize: 
     144           if self.splitSize < other.splitSize: 
     145              return -1 
     146           else: 
     147              return 1 
     148        return 0 
     149   
     150     def _setSizeLimit(self, value): 
     151        """ 
     152        Property target used to set the size limit. 
     153        If not C{None}, the value must be a C{ByteQuantity} object. 
     154        @raise ValueError: If the value is not a C{ByteQuantity} 
     155        """ 
     156        if value is None: 
     157           self._sizeLimit = None 
     158        else: 
     159           if not isinstance(value, ByteQuantity): 
     160              raise ValueError("Value must be a C{ByteQuantity} object.") 
     161           self._sizeLimit = value 
     162   
     163     def _getSizeLimit(self): 
     164        """ 
     165        Property target used to get the size limit. 
     166        """ 
     167        return self._sizeLimit 
     168   
     169     def _setSplitSize(self, value): 
     170        """ 
     171        Property target used to set the split size. 
     172        If not C{None}, the value must be a C{ByteQuantity} object. 
     173        @raise ValueError: If the value is not a C{ByteQuantity} 
     174        """ 
     175        if value is None: 
     176           self._splitSize = None 
     177        else: 
     178           if not isinstance(value, ByteQuantity): 
     179              raise ValueError("Value must be a C{ByteQuantity} object.") 
     180           self._splitSize = value 
     181   
     182     def _getSplitSize(self): 
     183        """ 
     184        Property target used to get the split size. 
     185        """ 
     186        return self._splitSize 
     187   
     188     sizeLimit = property(_getSizeLimit, _setSizeLimit, None, doc="Size limit, as a ByteQuantity") 
     189     splitSize = property(_getSplitSize, _setSplitSize, None, doc="Split size, as a ByteQuantity") 
     190   
     191   
     192  ######################################################################## 
     193  # LocalConfig class definition 
     194  ######################################################################## 
     195   
     196  class LocalConfig(object): 
     197   
     198     """ 
     199     Class representing this extension's configuration document. 
     200   
     201     This is not a general-purpose configuration object like the main Cedar 
     202     Backup configuration object.  Instead, it just knows how to parse and emit 
     203     split-specific configuration values.  Third parties who need to read and 
     204     write configuration related to this extension should access it through the 
     205     constructor, C{validate} and C{addConfig} methods. 
     206   
     207     @note: Lists within this class are "unordered" for equality comparisons. 
     208   
     209     @sort: __init__, __repr__, __str__, __cmp__, split, validate, addConfig 
     210     """ 
     211   
     212     def __init__(self, xmlData=None, xmlPath=None, validate=True): 
     213        """ 
     214        Initializes a configuration object. 
     215   
     216        If you initialize the object without passing either C{xmlData} or 
     217        C{xmlPath} then configuration will be empty and will be invalid until it 
     218        is filled in properly. 
     219   
     220        No reference to the original XML data or original path is saved off by 
     221        this class.  Once the data has been parsed (successfully or not) this 
     222        original information is discarded. 
     223   
     224        Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 
     225        method will be called (with its default arguments) against configuration 
     226        after successfully parsing any passed-in XML.  Keep in mind that even if 
     227        C{validate} is C{False}, it might not be possible to parse the passed-in 
     228        XML document if lower-level validations fail. 
     229   
     230        @note: It is strongly suggested that the C{validate} option always be set 
     231        to C{True} (the default) unless there is a specific need to read in 
     232        invalid configuration from disk. 
     233   
     234        @param xmlData: XML data representing configuration. 
     235        @type xmlData: String data. 
     236   
     237        @param xmlPath: Path to an XML file on disk. 
     238        @type xmlPath: Absolute path to a file on disk. 
     239   
     240        @param validate: Validate the document after parsing it. 
     241        @type validate: Boolean true/false. 
     242   
     243        @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 
     244        @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 
     245        @raise ValueError: If the parsed configuration document is not valid. 
     246        """ 
     247        self._split = None 
     248        self.split = None 
     249        if xmlData is not None and xmlPath is not None: 
     250           raise ValueError("Use either xmlData or xmlPath, but not both.") 
     251        if xmlData is not None: 
     252           self._parseXmlData(xmlData) 
     253           if validate: 
     254              self.validate() 
     255        elif xmlPath is not None: 
     256           xmlData = open(xmlPath).read() 
     257           self._parseXmlData(xmlData) 
     258           if validate: 
     259              self.validate() 
     260   
     261     def __repr__(self): 
     262        """ 
     263        Official string representation for class instance. 
     264        """ 
     265        return "LocalConfig(%s)" % (self.split) 
     266   
     267     def __str__(self): 
     268        """ 
     269        Informal string representation for class instance. 
     270        """ 
     271        return self.__repr__() 
     272   
     273     def __cmp__(self, other): 
     274        """ 
     275        Definition of equals operator for this class. 
     276        Lists within this class are "unordered" for equality comparisons. 
     277        @param other: Other object to compare to. 
     278        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
     279        """ 
     280        if other is None: 
     281           return 1 
     282        if self.split != other.split: 
     283           if self.split < other.split: 
     284              return -1 
     285           else: 
     286              return 1 
     287        return 0 
     288   
     289     def _setSplit(self, value): 
     290        """ 
     291        Property target used to set the split configuration value. 
     292        If not C{None}, the value must be a C{SplitConfig} object. 
     293        @raise ValueError: If the value is not a C{SplitConfig} 
     294        """ 
     295        if value is None: 
     296           self._split = None 
     297        else: 
     298           if not isinstance(value, SplitConfig): 
     299              raise ValueError("Value must be a C{SplitConfig} object.") 
     300           self._split = value 
     301   
     302     def _getSplit(self): 
     303        """ 
     304        Property target used to get the split configuration value. 
     305        """ 
     306        return self._split 
     307   
     308     split = property(_getSplit, _setSplit, None, "Split configuration in terms of a C{SplitConfig} object.") 
     309   
     310     def validate(self): 
     311        """ 
     312        Validates configuration represented by the object. 
     313   
     314        Split configuration must be filled in.  Within that, both the size limit 
     315        and split size must be filled in. 
     316   
     317        @raise ValueError: If one of the validations fails. 
     318        """ 
     319        if self.split is None: 
     320           raise ValueError("Split section is required.") 
     321        if self.split.sizeLimit is None: 
     322           raise ValueError("Size limit must be set.") 
     323        if self.split.splitSize is None: 
     324           raise ValueError("Split size must be set.") 
     325   
     326     def addConfig(self, xmlDom, parentNode): 
     327        """ 
     328        Adds a <split> configuration section as the next child of a parent. 
     329   
     330        Third parties should use this function to write configuration related to 
     331        this extension. 
     332   
     333        We add the following fields to the document:: 
     334   
     335           sizeLimit      //cb_config/split/size_limit 
     336           splitSize      //cb_config/split/split_size 
     337   
     338        @param xmlDom: DOM tree as from C{impl.createDocument()}. 
     339        @param parentNode: Parent that the section should be appended to. 
     340        """ 
     341        if self.split is not None: 
     342           sectionNode = addContainerNode(xmlDom, parentNode, "split") 
     343           addByteQuantityNode(xmlDom, sectionNode, "size_limit", self.split.sizeLimit) 
     344           addByteQuantityNode(xmlDom, sectionNode, "split_size", self.split.splitSize) 
     345   
     346     def _parseXmlData(self, xmlData): 
     347        """ 
     348        Internal method to parse an XML string into the object. 
     349   
     350        This method parses the XML document into a DOM tree (C{xmlDom}) and then 
     351        calls a static method to parse the split configuration section. 
     352   
     353        @param xmlData: XML data to be parsed 
     354        @type xmlData: String data 
     355   
     356        @raise ValueError: If the XML cannot be successfully parsed. 
     357        """ 
     358        (xmlDom, parentNode) = createInputDom(xmlData) 
     359        self._split = LocalConfig._parseSplit(parentNode) 
     360   
     361     @staticmethod 
     362     def _parseSplit(parent): 
     363        """ 
     364        Parses a split configuration section. 
     365   
     366        We read the following individual fields:: 
     367   
     368           sizeLimit      //cb_config/split/size_limit 
     369           splitSize      //cb_config/split/split_size 
     370   
     371        @param parent: Parent node to search beneath. 
     372   
     373        @return: C{SplitConfig} object or C{None} if the section does not exist. 
     374        @raise ValueError: If some filled-in value is invalid. 
     375        """ 
     376        split = None 
     377        section = readFirstChild(parent, "split") 
     378        if section is not None: 
     379           split = SplitConfig() 
     380           split.sizeLimit = readByteQuantity(section, "size_limit") 
     381           split.splitSize = readByteQuantity(section, "split_size") 
     382        return split 
     383   
     384   
     385  ######################################################################## 
     386  # Public functions 
     387  ######################################################################## 
     388   
     389  ########################### 
     390  # executeAction() function 
     391  ########################### 
     392   
     393  def executeAction(configPath, options, config): 
     394     """ 
     395     Executes the split backup action. 
     396   
     397     @param configPath: Path to configuration file on disk. 
     398     @type configPath: String representing a path on disk. 
     399   
     400     @param options: Program command-line options. 
     401     @type options: Options object. 
     402   
     403     @param config: Program configuration. 
     404     @type config: Config object. 
     405   
     406     @raise ValueError: Under many generic error conditions 
     407     @raise IOError: If there are I/O problems reading or writing files 
     408     """ 
     409     logger.debug("Executing split extended action.") 
     410     if config.options is None or config.stage is None: 
     411        raise ValueError("Cedar Backup configuration is not properly filled in.") 
     412     local = LocalConfig(xmlPath=configPath) 
     413     dailyDirs = findDailyDirs(config.stage.targetDir, SPLIT_INDICATOR) 
     414     for dailyDir in dailyDirs: 
     415        _splitDailyDir(dailyDir, local.split.sizeLimit, local.split.splitSize, 
     416                       config.options.backupUser, config.options.backupGroup) 
     417        writeIndicatorFile(dailyDir, SPLIT_INDICATOR, config.options.backupUser, config.options.backupGroup) 
     418     logger.info("Executed the split extended action successfully.") 
     419   
     420   
     421  ############################## 
     422  # _splitDailyDir() function 
     423  ############################## 
     424   
     425  def _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup): 
     426     """ 
     427     Splits large files in a daily staging directory. 
     428   
     429     Files that match INDICATOR_PATTERNS (i.e. C{"cback.store"}, 
     430     C{"cback.stage"}, etc.) are assumed to be indicator files and are ignored. 
     431     All other files are split. 
     432   
     433     @param dailyDir: Daily directory whose contents should be split 
     434     @param sizeLimit: Size limit, in bytes 
     435     @param splitSize: Split size, in bytes 
     436     @param backupUser: User that target files should be owned by 
     437     @param backupGroup: Group that target files should be owned by 
     438   
     439     @raise ValueError: If one of the passed-in values is invalid. 
     440     @raise ValueError: If the daily staging directory does not exist. 
     441     """ 
     442     logger.debug("Begin splitting contents of [%s]." % dailyDir) 
     443     fileList = getBackupFiles(dailyDir)  # ignores indicator files 
     444     for path in fileList: 
     445        size = float(os.stat(path).st_size) 
     446        if size > sizeLimit.bytes: 
     447           _splitFile(path, splitSize, backupUser, backupGroup, removeSource=True) 
     448     logger.debug("Completed splitting contents of [%s]." % dailyDir) 
     449   
     450   
     451  ######################## 
     452  # _splitFile() function 
     453  ######################## 
     454   
     455  def _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False): 
     456     """ 
     457     Splits the source file into chunks of the indicated size. 
     458   
     459     The split files will be owned by the indicated backup user and group.  If 
     460     C{removeSource} is C{True}, then the source file will be removed after it is 
     461     successfully split. 
     462   
     463     @param sourcePath: Absolute path of the source file to split 
     464     @param splitSize: Size that the file should be split into, in bytes 
     465     @param backupUser: User that target files should be owned by 
     466     @param backupGroup: Group that target files should be owned by 
     467     @param removeSource: Indicates whether to remove the source file 
     468   
     469     @raise IOError: If there is a problem accessing, splitting or removing the source file. 
     470     """ 
     471     cwd = os.getcwd() 
     472     try: 
     473        if not os.path.exists(sourcePath): 
     474           raise ValueError("Source path [%s] does not exist." % sourcePath) 
     475        dirname = os.path.dirname(sourcePath) 
     476        filename = os.path.basename(sourcePath) 
     477        prefix = "%s_" % filename 
     478        bytes = int(splitSize.bytes)  # pylint: disable=W0622 
     479        os.chdir(dirname)  # need to operate from directory that we want files written to 
     480        command = resolveCommand(SPLIT_COMMAND) 
     481        args = [ "--verbose", "--numeric-suffixes", "--suffix-length=5", "--bytes=%d" % bytes, filename, prefix, ] 
     482        (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False) 
     483        if result != 0: 
     484           raise IOError("Error [%d] calling split for [%s]." % (result, sourcePath)) 
     485        pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix) 
     486        match = pattern.search(output[-1:][0]) 
     487        if match is None: 
     488           raise IOError("Unable to parse output from split command.") 
     489        value = int(match.group(3).strip()) 
     490        for index in range(0, value): 
     491           path = "%s%05d" % (prefix, index) 
     492           if not os.path.exists(path): 
     493              raise IOError("After call to split, expected file [%s] does not exist." % path) 
     494           changeOwnership(path, backupUser, backupGroup) 
     495        if removeSource: 
     496           if os.path.exists(sourcePath): 
     497              try: 
     498                 os.remove(sourcePath) 
     499                 logger.debug("Completed removing old file [%s]." % sourcePath) 
     500              except: 
     501                 raise IOError("Failed to remove file [%s] after splitting it." % (sourcePath)) 
     502     finally: 
     503        os.chdir(cwd) 
     504   
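The chunking performed by `_splitFile()` above delegates to the external split utility, but its effect can be sketched with a stdlib-only stand-in (the `splitFile` name and `_NNNNN` suffix scheme mirror the code above; this sketch is illustrative, not the shipped implementation):

```python
import os
import tempfile

def splitFile(sourcePath, splitSize):
   """Split sourcePath into sourcePath_00000, sourcePath_00001, ... chunks."""
   count = 0
   with open(sourcePath, "rb") as source:
      while True:
         chunk = source.read(splitSize)
         if not chunk:
            break
         with open("%s_%05d" % (sourcePath, count), "wb") as target:
            target.write(chunk)
         count += 1
   return count

# A 10-byte file with a 4-byte split size yields chunks of 4, 4 and 2 bytes.
```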

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mysql-module.html
    Package CedarBackup2 :: Package extend :: Module mysql

    Module mysql


    Provides an extension to back up MySQL databases.

    This is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It requires a new configuration section <mysql> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. Note that this code always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I'll update this extension or provide another.

    The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably the best choice.

    The extension accepts a username and password in configuration. However, you probably do not want to provide those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

      [mysqldump]
      user     = root
      password = <secret>
    

    Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).
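A minimal stdlib sketch of creating such a file with owner-only permissions (the writeCredentials name and file layout are illustrative, not part of Cedar Backup):

```python
import os

def writeCredentials(path, user, password):
   # Create the file with a restrictive mode up front, then enforce 0600
   # explicitly so the result does not depend on the process umask.
   fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
   with os.fdopen(fd, "w") as mycnf:
      mycnf.write("[mysqldump]\n")
      mycnf.write("user     = %s\n" % user)
      mycnf.write("password = %s\n" % password)
   os.chmod(path, 0o600)
```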


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      MysqlConfig
    Class representing MySQL configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the MySQL backup action.
     
    _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None)
    Backs up an individual MySQL database, or all databases.
     
    _getOutputFile(targetDir, database, compressMode)
    Opens the output file used for saving the MySQL dump.
     
    backupDatabase(user, password, backupFile, database=None)
    Backs up an individual MySQL database, or all databases.
    Variables
      logger = logging.getLogger("CedarBackup2.log.extend.mysql")
      MYSQLDUMP_COMMAND = ['mysqldump']
      __package__ = 'CedarBackup2.extend'
    Function Details

    executeAction(configPath, options, config)


    Executes the MySQL backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None)


    Backs up an individual MySQL database, or all databases.

    This internal method wraps the public method and adds some functionality, like figuring out a filename, etc.

    Parameters:
    • targetDir - Directory into which backups should be written.
    • compressMode - Compress mode to be used for backed-up files.
    • user - User to use for connecting to the database (if any).
    • password - Password associated with user (if any).
    • backupUser - User to own resulting file.
    • backupGroup - Group to own resulting file.
    • database - Name of database, or None for all databases.
    Returns:
    Name of the generated backup file.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the MySQL dump.

    _getOutputFile(targetDir, database, compressMode)


    Opens the output file used for saving the MySQL dump.

    The filename is either "mysqldump.txt" or "mysqldump-<database>.txt". An extension such as ".bz2" is added when the compress mode calls for compression.

    Parameters:
    • targetDir - Target directory to write file in.
    • database - Name of the database (if any)
    • compressMode - Compress mode to be used for backed-up files.
    Returns:
    Tuple of (Output file object, filename)

    backupDatabase(user, password, backupFile, database=None)


    Backs up an individual MySQL database, or all databases.

    This function backs up either a named local MySQL database or all local MySQL databases, using the passed-in user and password (if provided) for connectivity. This function call always results in a full backup. There is no facility for incremental backups.

    The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open(), but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Often, the "root" database user will be used when backing up all databases. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) all of the databases that will be backed up.

    This function accepts a username and password. However, you probably do not want to pass those values in. This is because they will be provided to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, this would be done by putting a stanza like this in /root/.my.cnf, to provide mysqldump with the root database username and its password:

      [mysqldump]
      user     = root
      password = <secret>
    

    If you are executing this function as some system user other than root, then the .my.cnf file would be placed in the home directory of that user. In either case, make sure to set restrictive permissions (typically, mode 0600) on .my.cnf to make sure that other users cannot read the file.

    Parameters:
    • user (String representing MySQL username, or None) - User to use for connecting to the database (if any)
    • password (String representing MySQL password, or None) - Password associated with user (if any)
    • backupFile (Python file object as from open() or file().) - File used for writing backup.
    • database (String representing database name, or None for all databases.) - Name of the database to be backed up.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the MySQL dump.
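The GzipFile pattern described above, where the caller owns and closes the file object, can be sketched with a stdlib-only stand-in (backupToFile below is an illustrative placeholder for backupDatabase(), which really shells out to mysqldump; the path and dump contents are also illustrative):

```python
import gzip
import os
import tempfile

def backupToFile(backupFile, lines):
   # Stand-in for backupDatabase(): whatever produces the dump just writes
   # to the passed-in file object and never closes it itself.
   for line in lines:
      backupFile.write(line.encode("utf-8"))

# The caller owns the file object; layering gzip.GzipFile (via gzip.open)
# over an ordinary file yields compressed output transparently.
dumpPath = os.path.join(tempfile.mkdtemp(), "mysqldump-example.txt.gz")
backupFile = gzip.open(dumpPath, "wb")
try:
   backupToFile(backupFile, ["-- illustrative dump\n", "CREATE TABLE example (id INT);\n"])
finally:
   backupFile.close()   # caller is responsible for closing
```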

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.PostActionHook-class.html
    Package CedarBackup2 :: Module config :: Class PostActionHook

    Class PostActionHook


    object --+    
             |    
    ActionHook --+
                 |
                PostActionHook
    

    Class representing a post-action hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a post-action hook is executed after the named action.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The shell command must be a non-empty string.

    The internal after instance variable is always set to True in this class.

    Instance Methods
     
    __init__(self, action=None, command=None)
    Constructor for the PostActionHook class.
     
    __repr__(self)
    Official string representation for class instance.

    Inherited from ActionHook: __str__, __cmp__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from ActionHook: action, command, before, after

    Inherited from object: __class__

    Method Details

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the PostActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.subversion.BDBRepository-class.html: CedarBackup2.extend.subversion.BDBRepository
    Package CedarBackup2 :: Package extend :: Module subversion :: Class BDBRepository

    Class BDBRepository

    source code

    object --+    
             |    
    Repository --+
                 |
                BDBRepository
    

    Class representing Subversion BDB (Berkeley Database) repository configuration. This object is deprecated. Use a simple Repository instead.

    Instance Methods
     
    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the BDBRepository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from Repository: __cmp__, __str__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from Repository: collectMode, compressMode, repositoryPath, repositoryType

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the BDBRepository class.

    Parameters:
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.validate-module.html: CedarBackup2.actions.validate
    Package CedarBackup2 :: Package actions :: Module validate

    Module validate

    source code

    Implements the standard 'validate' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeValidate(configPath, options, config)
    Executes the validate action.
    source code
     
    _checkDir(path, writable, logfunc, prefix)
    Checks that the indicated directory is OK.
    source code
     
    _validateReference(config, logfunc)
    Execute runtime validations on reference configuration.
    source code
     
    _validateOptions(config, logfunc)
    Execute runtime validations on options configuration.
    source code
     
    _validateCollect(config, logfunc)
    Execute runtime validations on collect configuration.
    source code
     
    _validateStage(config, logfunc)
    Execute runtime validations on stage configuration.
    source code
     
    _validateStore(config, logfunc)
    Execute runtime validations on store configuration.
    source code
     
    _validatePurge(config, logfunc)
    Execute runtime validations on purge configuration.
    source code
     
    _validateExtensions(config, logfunc)
    Execute runtime validations on extensions configuration.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.validate")
      __package__ = 'CedarBackup2.actions'
    Function Details

    executeValidate(configPath, options, config)

    source code 

    Executes the validate action.

    This action validates each of the individual sections in the config file. This is a "runtime" validation. The config file itself is already valid in a structural sense, so what we check here is that we can actually use the configuration without any problems.

    There's a separate validation function for each of the configuration sections. Each validation function returns a true/false indication for whether configuration was valid, and then logs any configuration problems it finds. This way, one pass over configuration indicates most or all of the obvious problems, rather than finding just one problem at a time.

    Any reported problems will be logged at the ERROR level normally, or at the INFO level if the quiet flag is enabled.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - If some configuration value is invalid.
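    The one-pass strategy described above, where every section is checked and all problems are logged before returning an overall result, can be sketched as follows. The check functions here are illustrative stand-ins, not the real _validate* functions:

```python
def validate_all(config, logfunc, checks):
    """Run every check, logging each failure via logfunc, and return
    overall validity.  Unlike raising on the first error, this reports
    all problems in a single pass over configuration."""
    valid = True
    for check in checks:
        if not check(config, logfunc):
            valid = False   # remember the failure, but keep checking
    return valid

errors = []
checks = [lambda cfg, log: True,                          # a passing check
          lambda cfg, log: log("bad options") or False,   # a failing check
          lambda cfg, log: log("bad collect") or False]   # another failure
ok = validate_all({}, errors.append, checks)
print(ok, errors)  # False ['bad options', 'bad collect']
```

Both failures are reported in one run, rather than just the first one found.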

    _checkDir(path, writable, logfunc, prefix)

    source code 

    Checks that the indicated directory is OK.

    The path must exist, must be a directory, must be readable and executable, and must optionally be writable.

    Parameters:
    • path - Path to check.
    • writable - Check that path is writable.
    • logfunc - Function to use for logging errors.
    • prefix - Prefix to use on logged errors.
    Returns:
    True if the directory is OK, False otherwise.
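    The checks described above can be sketched as a standalone function (this sketch and its name are not part of the Cedar Backup API, and it omits the logging):

```python
import os
import tempfile

def check_dir(path, writable):
    """Return True when path exists, is a directory, is readable and
    executable, and (when requested) writable -- a sketch of the checks
    described above."""
    if not os.path.exists(path):
        return False
    if not os.path.isdir(path):
        return False
    if not os.access(path, os.R_OK | os.X_OK):
        return False
    if writable and not os.access(path, os.W_OK):
        return False
    return True

d = tempfile.mkdtemp()
print(check_dir(d, writable=True))                # True on a normal filesystem
print(check_dir("/no/such/dir", writable=False))  # False: does not exist
```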

    _validateReference(config, logfunc)

    source code 

    Execute runtime validations on reference configuration.

    We only validate that reference configuration exists at all.

    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateOptions(config, logfunc)

    source code 

    Execute runtime validations on options configuration.

    The following validations are enforced:

    • The options section must exist
    • The working directory must exist and must be writable
    • The backup user and backup group must exist
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateCollect(config, logfunc)

    source code 

    Execute runtime validations on collect configuration.

    The following validations are enforced:

    • The target directory must exist and must be writable
    • Each of the individual collect directories must exist and must be readable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateStage(config, logfunc)

    source code 

    Execute runtime validations on stage configuration.

    The following validations are enforced:

    • The target directory must exist and must be writable
    • Each local peer's collect directory must exist and must be readable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    Note: We currently do not validate anything having to do with remote peers, since we don't have a straightforward way of doing it. It would require adding an rsh command rather than just an rcp command to configuration, and that just doesn't seem worth it right now.

    _validateStore(config, logfunc)

    source code 

    Execute runtime validations on store configuration.

    The following validations are enforced:

    • The source directory must exist and must be readable
    • The backup device (path and SCSI device) must be valid
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validatePurge(config, logfunc)

    source code 

    Execute runtime validations on purge configuration.

    The following validations are enforced:

    • Each purge directory must exist and must be writable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateExtensions(config, logfunc)

    source code 

    Execute runtime validations on extensions configuration.

    The following validations are enforced:

    • Each indicated extension function must exist.
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.filesystem.PurgeItemList-class.html: CedarBackup2.filesystem.PurgeItemList
    Package CedarBackup2 :: Module filesystem :: Class PurgeItemList

    Class PurgeItemList

    source code

    object --+        
             |        
          list --+    
                 |    
    FilesystemList --+
                     |
                    PurgeItemList
    

    List of files and directories to be purged.

    A PurgeItemList is a FilesystemList containing a list of files and directories to be purged. On top of the generic functionality provided by FilesystemList, this class adds functionality to remove items that are too young to be purged, and to actually remove each item in the list from the filesystem.

    The other main difference is that when you add a directory's contents to a purge item list, the directory itself is not added to the list. This way, if someone asks to purge within /opt/backup/collect, that directory doesn't get removed once all of the files within it are gone.
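    The "contents but not the directory itself" behavior can be sketched with a plain os.walk traversal (an illustrative stand-in, not the real PurgeItemList implementation):

```python
import os
import tempfile

def purge_candidates(root):
    """Collect every file and subdirectory under root, but never root
    itself -- mirroring the behavior described above (sketch only)."""
    items = []
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath != root:
            items.append(dirpath)   # subdirectories are eligible for purge
        for name in filenames:
            items.append(os.path.join(dirpath, name))
    return items

root = tempfile.mkdtemp()
open(os.path.join(root, "a.txt"), "w").close()
os.mkdir(os.path.join(root, "sub"))
items = purge_candidates(root)
print(root in items)  # False: the purge directory itself is never listed
print(len(items))     # 2: the file and the subdirectory
```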

    Instance Methods
    new empty list
    __init__(self)
    Initializes a list with no configured exclusions.
    source code
     
    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)
    Adds the contents of a directory to the list.
    source code
     
    removeYoungFiles(self, daysOld)
    Removes from the list files younger than a certain age (in days).
    source code
     
    purgeItems(self)
    Purges all items in the list.
    source code

    Inherited from FilesystemList: addDir, addFile, normalize, removeDirs, removeFiles, removeInvalid, removeLinks, removeMatch, verify

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from FilesystemList: excludeBasenamePatterns, excludeDirs, excludeFiles, excludeLinks, excludePaths, excludePatterns, ignoreFile

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)

    source code 

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)

    source code 

    Adds the contents of a directory to the list.

    The path must exist and must be a directory or a link to a directory. The contents of the directory (but not the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory's immediate contents to be added (no recursion), then pass in recursive=False.

    Parameters:
    • path (String representing a path on disk) - Directory path whose contents should be added to the list
    • recursive (Boolean value) - Indicates whether directory contents should be added recursively.
    • addSelf - Ignored in this subclass.
    • linkDepth (Integer value, where zero means not to follow any soft links) - Depth of soft links that should be followed
    • dereference (Boolean value) - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Overrides: FilesystemList.addDirContents
    Notes:
    • If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list.
    • If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links within the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc.
    • Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored.
    • The excludeDirs flag only controls whether any given soft link path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • The excludeDirs flag only controls whether any given directory path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • If you call this method on a link to a directory, that link will never be dereferenced (it may, however, be followed).

    removeYoungFiles(self, daysOld)

    source code 

    Removes from the list files younger than a certain age (in days).

    Any file whose "age" in days is less than (<) the value of the daysOld parameter will be removed from the list so that it will not be purged later when purgeItems is called. Directories and soft links will be ignored.

    The "age" of a file is the amount of time since the file was last used, per the most recent of the file's st_atime and st_mtime values.

    Parameters:
    • daysOld (Integer value >= 0.) - Minimum age of files that are to be kept in the list.
    Returns:
    Number of entries removed

    Note: Some people find the "sense" of this method confusing or "backwards". Keep in mind that this method is used to remove items from the list, not from the filesystem! It removes from the list those items that you would not want to purge because they are too young. As an example, passing in daysOld of zero (0) would remove no files from the list, which would result in purging all of the files later. I would be happy to make a synonym of this method with an easier-to-understand "sense", if someone can suggest one.
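    The age rule described above -- a file is "young" (kept in the list, i.e. not purged) when its age in days, measured from the most recent of st_atime and st_mtime, is less than daysOld -- can be sketched as a small standalone function (not the real implementation):

```python
import time

def is_young(st_atime, st_mtime, days_old, now=None):
    """Return True when the file's age in days -- measured from the most
    recent of st_atime and st_mtime -- is less than (<) days_old."""
    now = time.time() if now is None else now
    last_used = max(st_atime, st_mtime)
    age_days = (now - last_used) / 86400.0
    return age_days < days_old

now = 1000000.0
day = 86400.0
print(is_young(now - 2 * day, now - 10 * day, 7, now))  # True: used 2 days ago
print(is_young(now - 9 * day, now - 10 * day, 7, now))  # False: 9 days old
print(is_young(now - day, now - day, 0, now))           # False: daysOld=0 keeps nothing
```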

    purgeItems(self)

    source code 

    Purges all items in the list.

    Every item in the list will be purged. Directories in the list will not be purged recursively, and hence will only be removed if they are empty. Errors will be ignored.

    To facilitate easy removal of directories that will end up being empty, the delete process happens in two passes: files first (including soft links), then directories.

    Returns:
    Tuple containing count of (files, dirs) removed
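    The two-pass strategy described above can be sketched as follows. This is illustrative only; the deepest-first ordering of the directory pass is this sketch's own addition, not something the documentation above specifies:

```python
import os
import tempfile

def purge_two_pass(items):
    """Delete files and soft links first, then directories, ignoring
    errors.  A directory is removed only when it is empty, so a directory
    whose files were deleted in the first pass can go in the second."""
    files = dirs = 0
    for item in items:                                 # pass 1: files and links
        if os.path.islink(item) or os.path.isfile(item):
            try:
                os.remove(item)
                files += 1
            except OSError:
                pass
    for item in sorted(items, key=len, reverse=True):  # pass 2: directories
        if os.path.isdir(item):
            try:
                os.rmdir(item)                         # succeeds only when empty
                dirs += 1
            except OSError:
                pass
    return (files, dirs)

root = tempfile.mkdtemp()
sub = os.path.join(root, "sub")
os.mkdir(sub)
path = os.path.join(sub, "old.txt")
open(path, "w").close()
result = purge_two_pass([path, sub])
print(result)  # (1, 1): the file is removed, then the now-empty directory
```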

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.util-module.html: CedarBackup2.actions.util
    Package CedarBackup2 :: Package actions :: Module util

    Module util

    source code

    Implements action-related utilities


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    findDailyDirs(stagingDir, indicatorFile)
    Returns a list of all daily staging directories that do not contain the indicated indicator file.
    source code
     
    createWriter(config)
    Creates a writer object based on current configuration.
    source code
     
    writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup)
    Writes an indicator file into a target directory.
    source code
     
    getBackupFiles(targetDir)
    Gets a list of backup files in a target directory.
    source code
     
    checkMediaState(storeConfig)
    Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup.
    source code
     
    initializeMediaState(config)
    Initializes state of the media in the backup device so Cedar Backup can recognize it.
    source code
     
    buildMediaLabel()
    Builds a media label to be used on Cedar Backup media.
    source code
     
    _getDeviceType(config)
    Gets the device type that should be used for storing.
    source code
     
    _getMediaType(config)
    Gets the media type that should be used for storing.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.util")
      MEDIA_LABEL_PREFIX = 'CEDAR BACKUP'
      __package__ = 'CedarBackup2.actions'
    Function Details

    findDailyDirs(stagingDir, indicatorFile)

    source code 

    Returns a list of all daily staging directories that do not contain the indicated indicator file.

    Parameters:
    • stagingDir - Configured staging directory (config.targetDir)
    • indicatorFile - Name of the indicator file to look for
    Returns:
    List of absolute paths to daily staging directories.

    createWriter(config)

    source code 

    Creates a writer object based on current configuration.

    This function creates and returns a writer based on configuration. This is done to abstract action functionality from knowing what kind of writer is in use. Since all writers implement the same interface, there's no need for actions to care which one they're working with.

    Currently, the cdwriter and dvdwriter device types are allowed. An exception will be raised if any other device type is used.

    This function also checks to make sure that the device isn't mounted before creating a writer object for it. Experience shows that sometimes if the device is mounted, we have problems with the backup. We may as well do the check here first, before instantiating the writer.

    Parameters:
    • config - Config object.
    Returns:
    Writer that can be used to write a directory to some media.
    Raises:
    • ValueError - If there is a problem getting the writer.
    • IOError - If there is a problem creating the writer object.

    writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup)

    source code 

    Writes an indicator file into a target directory.

    Parameters:
    • targetDir - Target directory in which to write indicator
    • indicatorFile - Name of the indicator file
    • backupUser - User that indicator file should be owned by
    • backupGroup - Group that indicator file should be owned by
    Raises:
    • IOError - If there is a problem writing the indicator file

    getBackupFiles(targetDir)

    source code 

    Gets a list of backup files in a target directory.

    Files that match INDICATOR_PATTERN (i.e. "cback.store", "cback.stage", etc.) are assumed to be indicator files and are ignored.

    Parameters:
    • targetDir - Directory to look in
    Returns:
    List of backup files in the directory
    Raises:
    • ValueError - If the target directory does not exist

    checkMediaState(storeConfig)

    source code 

    Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup.

    We can tell whether the media has been initialized by looking at its media label. If the media label starts with MEDIA_LABEL_PREFIX, then it has been initialized.

    The check varies depending on whether the media is rewritable or not. For non-rewritable media, we also accept a None media label, since this kind of media cannot safely be initialized.

    Parameters:
    • storeConfig - Store configuration
    Raises:
    • ValueError - If media is not initialized.
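    The label check described above can be sketched as a standalone predicate. The function is a hypothetical illustration; only the MEDIA_LABEL_PREFIX value ('CEDAR BACKUP') comes from the documentation above:

```python
MEDIA_LABEL_PREFIX = "CEDAR BACKUP"   # value documented in this module

def media_initialized(label, rewritable):
    """Sketch of the check described above: initialized media has a label
    starting with MEDIA_LABEL_PREFIX; non-rewritable media may also
    legitimately have no label at all, since it cannot safely be
    initialized."""
    if label is None:
        return not rewritable   # acceptable only for non-rewritable media
    return label.startswith(MEDIA_LABEL_PREFIX)

print(media_initialized("CEDAR BACKUP 2013-05-09", rewritable=True))  # True
print(media_initialized(None, rewritable=True))    # False: blank CD-RW needs initializing
print(media_initialized(None, rewritable=False))   # True: CD-R cannot be initialized
```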

    initializeMediaState(config)

    source code 

    Initializes state of the media in the backup device so Cedar Backup can recognize it.

    This is done by writing a mostly-empty image (it contains a "Cedar Backup" directory) to the media with a known media label.

    Parameters:
    • config - Cedar Backup configuration
    Raises:
    • ValueError - If media could not be initialized.
    • ValueError - If the configured media type is not rewritable

    Note: Only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup.

    buildMediaLabel()

    source code 

    Builds a media label to be used on Cedar Backup media.

    Returns:
    Media label as a string.

    _getDeviceType(config)

    source code 

    Gets the device type that should be used for storing.

    Use the configured device type if not None, otherwise use config.DEFAULT_DEVICE_TYPE.

    Parameters:
    • config - Config object.
    Returns:
    Device type to be used.

    _getMediaType(config)

    source code 

    Gets the media type that should be used for storing.

    Use the configured media type if not None, otherwise use DEFAULT_MEDIA_TYPE.

    Once we figure out what configuration value to use, we return a media type value that is valid in one of the supported writers:

      MEDIA_CDR_74
      MEDIA_CDRW_74
      MEDIA_CDR_80
      MEDIA_CDRW_80
      MEDIA_DVDPLUSR
      MEDIA_DVDPLUSRW
    
    Parameters:
    • config - Config object.
    Returns:
    Media type to be used as a writer media type value.
    Raises:
    • ValueError - If the media type is not valid.
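    The fallback-then-map behavior described above can be sketched as follows. The configured string keys and the default value here are assumptions made for illustration; the real constants live in the writer modules and the real default is config.DEFAULT_MEDIA_TYPE:

```python
# Hypothetical mapping from configured media type strings to writer
# media type values; keys and default are assumed, for illustration.
DEFAULT_MEDIA_TYPE = "cdrw-74"
MEDIA_MAP = {"cdr-74": "MEDIA_CDR_74", "cdrw-74": "MEDIA_CDRW_74",
             "cdr-80": "MEDIA_CDR_80", "cdrw-80": "MEDIA_CDRW_80",
             "dvd+r": "MEDIA_DVDPLUSR", "dvd+rw": "MEDIA_DVDPLUSRW"}

def get_media_type(configured):
    """Use the configured media type when set, else the default, then map
    it to a writer media type value; unknown values raise ValueError."""
    value = configured if configured is not None else DEFAULT_MEDIA_TYPE
    if value not in MEDIA_MAP:
        raise ValueError("Invalid media type: %s" % value)
    return MEDIA_MAP[value]

print(get_media_type("dvd+rw"))  # MEDIA_DVDPLUSRW
print(get_media_type(None))      # MEDIA_CDRW_74 (falls back to the default)
```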

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.cli._ManagedActionItem-class.html: CedarBackup2.cli._ManagedActionItem
    Package CedarBackup2 :: Module cli :: Class _ManagedActionItem

    Class _ManagedActionItem

    source code

    object --+
             |
            _ManagedActionItem
    

    Class representing a single action to be executed on a managed peer.

    This class represents a single named action to be executed, and understands how to execute that action.

    Actions to be executed on a managed peer rely on peer configuration and on the full-backup flag. All other configuration takes place on the remote peer itself.


    Note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type.

    Instance Methods
     
    __init__(self, index, name, remotePeers)
    Default constructor.
    source code
     
    __cmp__(self, other)
    Definition of the comparison operator for this class.
    source code
     
    executeAction(self, configPath, options, config)
    Executes the managed action associated with an item.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables
      SORT_ORDER = 1
    Defines a sort order to order properly between types.
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, index, name, remotePeers)
    (Constructor)

    source code 

    Default constructor.

    Parameters:
    • index - Index of the item (or None).
    • name - Name of the action that is being executed.
    • remotePeers - List of remote peers on which to execute the action.
    Overrides: object.__init__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of the comparison operator for this class. The only thing we compare is the item's index.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    executeAction(self, configPath, options, config)

    source code 

    Executes the managed action associated with an item.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.
    Raises:
    • Exception - If there is a problem executing the action.
    Notes:
    • Only options.full is actually used. The rest of the arguments exist to satisfy the ActionItem interface.
    • Errors here result in a message logged to ERROR, but no thrown exception. The analogy is the stage action where a problem with one host should not kill the entire backup. Since we're logging an error, the administrator will get an email.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.purge-pysrc.html: CedarBackup2.actions.purge
    Package CedarBackup2 :: Package actions :: Module purge

    Source Code for Module CedarBackup2.actions.purge

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: purge.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Implements the standard 'purge' action. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Implements the standard 'purge' action. 
     41  @sort: executePurge 
     42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import logging 
     52   
     53  # Cedar Backup modules 
     54  from CedarBackup2.filesystem import PurgeItemList 
     55   
     56   
     57  ######################################################################## 
     58  # Module-wide constants and variables 
     59  ######################################################################## 
     60   
     61  logger = logging.getLogger("CedarBackup2.log.actions.purge") 
     62   
     63   
     64  ######################################################################## 
     65  # Public functions 
     66  ######################################################################## 
     67   
     68  ########################## 
     69  # executePurge() function 
     70  ########################## 
     71   
    
      72  def executePurge(configPath, options, config):
      73     """
      74     Executes the purge backup action.
      75  
      76     For each configured directory, we create a purge item list, remove from the
      77     list anything that's younger than the configured retain days value, and then
      78     purge from the filesystem what's left.
      79  
      80     @param configPath: Path to configuration file on disk.
      81     @type configPath: String representing a path on disk.
      82  
      83     @param options: Program command-line options.
      84     @type options: Options object.
      85  
      86     @param config: Program configuration.
      87     @type config: Config object.
      88  
      89     @raise ValueError: Under many generic error conditions
      90     """
      91     logger.debug("Executing the 'purge' action.")
      92     if config.options is None or config.purge is None:
      93        raise ValueError("Purge configuration is not properly filled in.")
      94     if config.purge.purgeDirs is not None:
      95        for purgeDir in config.purge.purgeDirs:
      96           purgeList = PurgeItemList()
      97           purgeList.addDirContents(purgeDir.absolutePath)   # add everything within directory
      98           purgeList.removeYoungFiles(purgeDir.retainDays)   # remove young files *from the list* so they won't be purged
      99           purgeList.purgeItems()                            # remove remaining items from the filesystem
     100     logger.info("Executed the 'purge' action successfully.")
     101  

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.tools.span-pysrc.html: CedarBackup2.tools.span
    Package CedarBackup2 :: Package tools :: Module span

    Source Code for Module CedarBackup2.tools.span

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: span.py 999 2010-07-07 19:58:25Z pronovic $ 
     31  # Purpose  : Spans staged data among multiple discs 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Notes 
     37  ######################################################################## 
     38   
     39  """ 
     40  Spans staged data among multiple discs 
     41   
     42  This is the Cedar Backup span tool.  It is intended for use by people who stage 
     43  more data than can fit on a single disc.  It allows a user to split staged data 
     44  among more than one disc.  It can't be an extension because it requires user 
     45  input when switching media. 
     46   
     47  Most configuration is taken from the Cedar Backup configuration file, 
     48  specifically the store section.  A few pieces of configuration are taken 
     49  directly from the user. 
     50   
     51  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     52  """ 
     53   
     54  ######################################################################## 
     55  # Imported modules and constants 
     56  ######################################################################## 
     57   
     58  # System modules 
     59  import sys 
     60  import os 
     61  import logging 
     62  import tempfile 
     63   
     64  # Cedar Backup modules  
     65  from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
     66  from CedarBackup2.util import displayBytes, convertSize, mount, unmount 
     67  from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES 
     68  from CedarBackup2.config import Config 
     69  from CedarBackup2.filesystem import BackupFileList, compareDigestMaps, normalizeDir 
     70  from CedarBackup2.cli import Options, setupLogging, setupPathResolver 
     71  from CedarBackup2.cli import DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE 
     72  from CedarBackup2.actions.constants import STORE_INDICATOR 
     73  from CedarBackup2.actions.util import createWriter 
     74  from CedarBackup2.actions.store import writeIndicatorFile 
     75  from CedarBackup2.actions.util import findDailyDirs 
     76   
     77   
     78  ######################################################################## 
     79  # Module-wide constants and variables 
     80  ######################################################################## 
     81   
     82  logger = logging.getLogger("CedarBackup2.log.tools.span") 
     83   
     84   
     85  ####################################################################### 
     86  # SpanOptions class 
     87  ####################################################################### 
     88   
    
    89 -class SpanOptions(Options):
    90 91 """ 92 Tool-specific command-line options. 93 94 Most of the cback command-line options are exactly what we need here -- 95 logfile path, permissions, verbosity, etc. However, we need to make a few 96 tweaks since we don't accept any actions. 97 98 Also, a few extra command line options that we accept are really ignored 99 underneath. I just don't care about that for a tool like this. 100 """ 101
    102 - def validate(self):
    103 """ 104 Validates command-line options represented by the object. 105 There are no validations here, because we don't use any actions. 106 @raise ValueError: If one of the validations fails. 107 """ 108 pass
    109 110 111 ####################################################################### 112 # Public functions 113 ####################################################################### 114 115 ################# 116 # cli() function 117 ################# 118
    119 -def cli():
    120 """ 121 Implements the command-line interface for the C{cback-span} script. 122 123 Essentially, this is the "main routine" for the cback-span script. It does 124 all of the argument processing for the script, and then also implements the 125 tool functionality. 126 127 This function looks pretty similiar to C{CedarBackup2.cli.cli()}. It's not 128 easy to refactor this code to make it reusable and also readable, so I've 129 decided to just live with the duplication. 130 131 A different error code is returned for each type of failure: 132 133 - C{1}: The Python interpreter version is < 2.5 134 - C{2}: Error processing command-line arguments 135 - C{3}: Error configuring logging 136 - C{4}: Error parsing indicated configuration file 137 - C{5}: Backup was interrupted with a CTRL-C or similar 138 - C{6}: Error executing other parts of the script 139 140 @note: This script uses print rather than logging to the INFO level, because 141 it is interactive. Underlying Cedar Backup functionality uses the logging 142 mechanism exclusively. 143 144 @return: Error code as described above. 
145 """ 146 try: 147 if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5]: 148 sys.stderr.write("Python version 2.5 or greater required.\n") 149 return 1 150 except: 151 # sys.version_info isn't available before 2.0 152 sys.stderr.write("Python version 2.5 or greater required.\n") 153 return 1 154 155 try: 156 options = SpanOptions(argumentList=sys.argv[1:]) 157 except Exception, e: 158 _usage() 159 sys.stderr.write(" *** Error: %s\n" % e) 160 return 2 161 162 if options.help: 163 _usage() 164 return 0 165 if options.version: 166 _version() 167 return 0 168 169 try: 170 logfile = setupLogging(options) 171 except Exception, e: 172 sys.stderr.write("Error setting up logging: %s\n" % e) 173 return 3 174 175 logger.info("Cedar Backup 'span' utility run started.") 176 logger.info("Options were [%s]" % options) 177 logger.info("Logfile is [%s]" % logfile) 178 179 if options.config is None: 180 logger.debug("Using default configuration file.") 181 configPath = DEFAULT_CONFIG 182 else: 183 logger.debug("Using user-supplied configuration file.") 184 configPath = options.config 185 186 try: 187 logger.info("Configuration path is [%s]" % configPath) 188 config = Config(xmlPath=configPath) 189 setupPathResolver(config) 190 except Exception, e: 191 logger.error("Error reading or handling configuration: %s" % e) 192 logger.info("Cedar Backup 'span' utility run completed with status 4.") 193 return 4 194 195 if options.stacktrace: 196 _executeAction(options, config) 197 else: 198 try: 199 _executeAction(options, config) 200 except KeyboardInterrupt: 201 logger.error("Backup interrupted.") 202 logger.info("Cedar Backup 'span' utility run completed with status 5.") 203 return 5 204 except Exception, e: 205 logger.error("Error executing backup: %s" % e) 206 logger.info("Cedar Backup 'span' utility run completed with status 6.") 207 return 6 208 209 logger.info("Cedar Backup 'span' utility run completed with status 0.") 210 return 0
    211 212 213 ####################################################################### 214 # Utility functions 215 ####################################################################### 216 217 #################### 218 # _usage() function 219 #################### 220
    221 -def _usage(fd=sys.stderr):
    222 """ 223 Prints usage information for the cback script. 224 @param fd: File descriptor used to print information. 225 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 226 """ 227 fd.write("\n") 228 fd.write(" Usage: cback-span [switches]\n") 229 fd.write("\n") 230 fd.write(" Cedar Backup 'span' tool.\n") 231 fd.write("\n") 232 fd.write(" This Cedar Backup utility spans staged data between multiple discs.\n") 233 fd.write(" It is a utility, not an extension, and requires user interaction.\n") 234 fd.write("\n") 235 fd.write(" The following switches are accepted, mostly to set up underlying\n") 236 fd.write(" Cedar Backup functionality:\n") 237 fd.write("\n") 238 fd.write(" -h, --help Display this usage/help listing\n") 239 fd.write(" -V, --version Display version information\n") 240 fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") 241 fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) 242 fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) 243 fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) 244 fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) 245 fd.write(" -O, --output Record some sub-command (i.e. tar) output to the log\n") 246 fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") 247 fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") 248 fd.write("\n")
    249 250 251 ###################### 252 # _version() function 253 ###################### 254
    255 -def _version(fd=sys.stdout):
    256 """ 257 Prints version information for the cback script. 258 @param fd: File descriptor used to print information. 259 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 260 """ 261 fd.write("\n") 262 fd.write(" Cedar Backup 'span' tool.\n") 263 fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) 264 fd.write("\n") 265 fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) 266 fd.write(" See CREDITS for a list of included code and other contributors.\n") 267 fd.write(" This is free software; there is NO warranty. See the\n") 268 fd.write(" GNU General Public License version 2 for copying conditions.\n") 269 fd.write("\n") 270 fd.write(" Use the --help option for usage information.\n") 271 fd.write("\n")
    272 273 274 ############################ 275 # _executeAction() function 276 ############################ 277
    278 -def _executeAction(options, config):
    279 """ 280 Implements the guts of the cback-span tool. 281 282 @param options: Program command-line options. 283 @type options: SpanOptions object. 284 285 @param config: Program configuration. 286 @type config: Config object. 287 288 @raise Exception: Under many generic error conditions 289 """ 290 print "" 291 print "================================================" 292 print " Cedar Backup 'span' tool" 293 print "================================================" 294 print "" 295 print "This the Cedar Backup span tool. It is used to split up staging" 296 print "data when that staging data does not fit onto a single disc." 297 print "" 298 print "This utility operates using Cedar Backup configuration. Configuration" 299 print "specifies which staging directory to look at and which writer device" 300 print "and media type to use." 301 print "" 302 if not _getYesNoAnswer("Continue?", default="Y"): 303 return 304 print "===" 305 306 print "" 307 print "Cedar Backup store configuration looks like this:" 308 print "" 309 print " Source Directory...: %s" % config.store.sourceDir 310 print " Media Type.........: %s" % config.store.mediaType 311 print " Device Type........: %s" % config.store.deviceType 312 print " Device Path........: %s" % config.store.devicePath 313 print " Device SCSI ID.....: %s" % config.store.deviceScsiId 314 print " Drive Speed........: %s" % config.store.driveSpeed 315 print " Check Data Flag....: %s" % config.store.checkData 316 print " No Eject Flag......: %s" % config.store.noEject 317 print "" 318 if not _getYesNoAnswer("Is this OK?", default="Y"): 319 return 320 print "===" 321 322 (writer, mediaCapacity) = _getWriter(config) 323 324 print "" 325 print "Please wait, indexing the source directory (this may take a while)..." 
326 (dailyDirs, fileList) = _findDailyDirs(config.store.sourceDir) 327 print "===" 328 329 print "" 330 print "The following daily staging directories have not yet been written to disc:" 331 print "" 332 for dailyDir in dailyDirs: 333 print " %s" % dailyDir 334 335 totalSize = fileList.totalSize() 336 print "" 337 print "The total size of the data in these directories is %s." % displayBytes(totalSize) 338 print "" 339 if not _getYesNoAnswer("Continue?", default="Y"): 340 return 341 print "===" 342 343 print "" 344 print "Based on configuration, the capacity of your media is %s." % displayBytes(mediaCapacity) 345 346 print "" 347 print "Since estimates are not perfect and there is some uncertainty in" 348 print "media capacity calculations, it is good to have a \"cushion\"," 349 print "a percentage of capacity to set aside. The cushion reduces the" 350 print "capacity of your media, so a 1.5% cushion leaves 98.5% remaining." 351 print "" 352 cushion = _getFloat("What cushion percentage?", default=4.5) 353 print "===" 354 355 realCapacity = ((100.0 - cushion)/100.0) * mediaCapacity 356 minimumDiscs = (totalSize/realCapacity) + 1 357 print "" 358 print "The real capacity, taking into account the %.2f%% cushion, is %s." % (cushion, displayBytes(realCapacity)) 359 print "It will take at least %d disc(s) to store your %s of data." % (minimumDiscs, displayBytes(totalSize)) 360 print "" 361 if not _getYesNoAnswer("Continue?", default="Y"): 362 return 363 print "===" 364 365 happy = False 366 while not happy: 367 print "" 368 print "Which algorithm do you want to use to span your data across" 369 print "multiple discs?" 
370 print "" 371 print "The following algorithms are available:" 372 print "" 373 print " first....: The \"first-fit\" algorithm" 374 print " best.....: The \"best-fit\" algorithm" 375 print " worst....: The \"worst-fit\" algorithm" 376 print " alternate: The \"alternate-fit\" algorithm" 377 print "" 378 print "If you don't like the results you will have a chance to try a" 379 print "different one later." 380 print "" 381 algorithm = _getChoiceAnswer("Which algorithm?", "worst", [ "first", "best", "worst", "alternate", ]) 382 print "===" 383 384 print "" 385 print "Please wait, generating file lists (this may take a while)..." 386 spanSet = fileList.generateSpan(capacity=realCapacity, algorithm="%s_fit" % algorithm) 387 print "===" 388 389 print "" 390 print "Using the \"%s-fit\" algorithm, Cedar Backup can split your data" % algorithm 391 print "into %d discs." % len(spanSet) 392 print "" 393 counter = 0 394 for item in spanSet: 395 counter += 1 396 print "Disc %d: %d files, %s, %.2f%% utilization" % (counter, len(item.fileList), 397 displayBytes(item.size), item.utilization) 398 print "" 399 if _getYesNoAnswer("Accept this solution?", default="Y"): 400 happy = True 401 print "===" 402 403 counter = 0 404 for spanItem in spanSet: 405 counter += 1 406 if counter == 1: 407 print "" 408 _getReturn("Please place the first disc in your backup device.\nPress return when ready.") 409 print "===" 410 else: 411 print "" 412 _getReturn("Please replace the disc in your backup device.\nPress return when ready.") 413 print "===" 414 _writeDisc(config, writer, spanItem) 415 416 _writeStoreIndicator(config, dailyDirs) 417 418 print "" 419 print "Completed writing all discs."
    420 421 422 ############################ 423 # _findDailyDirs() function 424 ############################ 425
    426 -def _findDailyDirs(stagingDir):
    427 """ 428 Returns a list of all daily staging directories that have not yet been 429 stored. 430 431 The store indicator file C{cback.store} will be written to a daily staging 432 directory once that directory is written to disc. So, this function looks 433 at each daily staging directory within the configured staging directory, and 434 returns a list of those which do not contain the indicator file. 435 436 Returned is a tuple containing two items: a list of daily staging 437 directories, and a BackupFileList containing all files among those staging 438 directories. 439 440 @param stagingDir: Configured staging directory 441 442 @return: Tuple (staging dirs, backup file list) 443 """ 444 results = findDailyDirs(stagingDir, STORE_INDICATOR) 445 fileList = BackupFileList() 446 for item in results: 447 fileList.addDirContents(item) 448 return (results, fileList)
    449 450 451 ################################## 452 # _writeStoreIndicator() function 453 ################################## 454
    455 -def _writeStoreIndicator(config, dailyDirs):
    456 """ 457 Writes a store indicator file into daily directories. 458 459 @param config: Config object. 460 @param dailyDirs: List of daily directories 461 """ 462 for dailyDir in dailyDirs: 463 writeIndicatorFile(dailyDir, STORE_INDICATOR, 464 config.options.backupUser, 465 config.options.backupGroup)
    466 467 468 ######################## 469 # _getWriter() function 470 ######################## 471
    472 -def _getWriter(config):
    473 """ 474 Gets a writer and media capacity from store configuration. 475 Returned is a writer and a media capacity in bytes. 476 @param config: Cedar Backup configuration 477 @return: Tuple of (writer, mediaCapacity) 478 """ 479 writer = createWriter(config) 480 mediaCapacity = convertSize(writer.media.capacity, UNIT_SECTORS, UNIT_BYTES) 481 return (writer, mediaCapacity)
    482 483 484 ######################## 485 # _writeDisc() function 486 ######################## 487
    488 -def _writeDisc(config, writer, spanItem):
    489 """ 490 Writes a span item to disc. 491 @param config: Cedar Backup configuration 492 @param writer: Writer to use 493 @param spanItem: Span item to write 494 """ 495 print "" 496 _discInitializeImage(config, writer, spanItem) 497 _discWriteImage(config, writer) 498 _discConsistencyCheck(config, writer, spanItem) 499 print "Write process is complete." 500 print "==="
    501
    502 -def _discInitializeImage(config, writer, spanItem):
    503 """ 504 Initialize an ISO image for a span item. 505 @param config: Cedar Backup configuration 506 @param writer: Writer to use 507 @param spanItem: Span item to write 508 """ 509 complete = False 510 while not complete: 511 try: 512 print "Initializing image..." 513 writer.initializeImage(newDisc=True, tmpdir=config.options.workingDir) 514 for path in spanItem.fileList: 515 graftPoint = os.path.dirname(path.replace(config.store.sourceDir, "", 1)) 516 writer.addImageEntry(path, graftPoint) 517 complete = True 518 except KeyboardInterrupt, e: 519 raise e 520 except Exception, e: 521 logger.error("Failed to initialize image: %s" % e) 522 if not _getYesNoAnswer("Retry initialization step?", default="Y"): 523 raise e 524 print "Ok, attempting retry." 525 print "===" 526 print "Completed initializing image."
    527
    528 -def _discWriteImage(config, writer):
    529 """ 530 Writes a ISO image for a span item. 531 @param config: Cedar Backup configuration 532 @param writer: Writer to use 533 """ 534 complete = False 535 while not complete: 536 try: 537 print "Writing image to disc..." 538 writer.writeImage() 539 complete = True 540 except KeyboardInterrupt, e: 541 raise e 542 except Exception, e: 543 logger.error("Failed to write image: %s" % e) 544 if not _getYesNoAnswer("Retry this step?", default="Y"): 545 raise e 546 print "Ok, attempting retry." 547 _getReturn("Please replace media if needed.\nPress return when ready.") 548 print "===" 549 print "Completed writing image."
    550
    551 -def _discConsistencyCheck(config, writer, spanItem):
    552 """ 553 Run a consistency check on an ISO image for a span item. 554 @param config: Cedar Backup configuration 555 @param writer: Writer to use 556 @param spanItem: Span item to write 557 """ 558 if config.store.checkData: 559 complete = False 560 while not complete: 561 try: 562 print "Running consistency check..." 563 _consistencyCheck(config, spanItem.fileList) 564 complete = True 565 except KeyboardInterrupt, e: 566 raise e 567 except Exception, e: 568 logger.error("Consistency check failed: %s" % e) 569 if not _getYesNoAnswer("Retry the consistency check?", default="Y"): 570 raise e 571 if _getYesNoAnswer("Rewrite the disc first?", default="N"): 572 print "Ok, attempting retry." 573 _getReturn("Please replace the disc in your backup device.\nPress return when ready.") 574 print "===" 575 _discWriteImage(config, writer) 576 else: 577 print "Ok, attempting retry." 578 print "===" 579 print "Completed consistency check."
    580 581 582 ############################### 583 # _consistencyCheck() function 584 ############################### 585
    586 -def _consistencyCheck(config, fileList):
    587 """ 588 Runs a consistency check against media in the backup device. 589 590 The function mounts the device at a temporary mount point in the working 591 directory, and then compares the passed-in file list's digest map with the 592 one generated from the disc. The two lists should be identical. 593 594 If no exceptions are thrown, there were no problems with the consistency 595 check. 596 597 @warning: The implementation of this function is very UNIX-specific. 598 599 @param config: Config object. 600 @param fileList: BackupFileList whose contents to check against 601 602 @raise ValueError: If the check fails 603 @raise IOError: If there is a problem working with the media. 604 """ 605 logger.debug("Running consistency check.") 606 mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) 607 try: 608 mount(config.store.devicePath, mountPoint, "iso9660") 609 discList = BackupFileList() 610 discList.addDirContents(mountPoint) 611 sourceList = BackupFileList() 612 sourceList.extend(fileList) 613 discListDigest = discList.generateDigestMap(stripPrefix=normalizeDir(mountPoint)) 614 sourceListDigest = sourceList.generateDigestMap(stripPrefix=normalizeDir(config.store.sourceDir)) 615 compareDigestMaps(sourceListDigest, discListDigest, verbose=True) 616 logger.info("Consistency check completed. No problems found.") 617 finally: 618 unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done
    619 620 621 ######################################################################### 622 # User interface utilities 623 ######################################################################## 624
    625 -def _getYesNoAnswer(prompt, default):
    626 """ 627 Get a yes/no answer from the user. 628 The default will be placed at the end of the prompt. 629 A "Y" or "y" is considered yes, anything else no. 630 A blank (empty) response results in the default. 631 @param prompt: Prompt to show. 632 @param default: Default to set if the result is blank 633 @return: Boolean true/false corresponding to Y/N 634 """ 635 if default == "Y": 636 prompt = "%s [Y/n]: " % prompt 637 else: 638 prompt = "%s [y/N]: " % prompt 639 answer = raw_input(prompt) 640 if answer in [ None, "", ]: 641 answer = default 642 if answer[0] in [ "Y", "y", ]: 643 return True 644 else: 645 return False
    646
    647 -def _getChoiceAnswer(prompt, default, validChoices):
    648 """ 649 Get a particular choice from the user. 650 The default will be placed at the end of the prompt. 651 The function loops until getting a valid choice. 652 A blank (empty) response results in the default. 653 @param prompt: Prompt to show. 654 @param default: Default to set if the result is None or blank. 655 @param validChoices: List of valid choices (strings) 656 @return: Valid choice from user. 657 """ 658 prompt = "%s [%s]: " % (prompt, default) 659 answer = raw_input(prompt) 660 if answer in [ None, "", ]: 661 answer = default 662 while answer not in validChoices: 663 print "Choice must be one of %s" % validChoices 664 answer = raw_input(prompt) 665 return answer
    666
    667 -def _getFloat(prompt, default):
    668 """ 669 Get a floating point number from the user. 670 The default will be placed at the end of the prompt. 671 The function loops until getting a valid floating point number. 672 A blank (empty) response results in the default. 673 @param prompt: Prompt to show. 674 @param default: Default to set if the result is None or blank. 675 @return: Floating point number from user 676 """ 677 prompt = "%s [%.2f]: " % (prompt, default) 678 while True: 679 answer = raw_input(prompt) 680 if answer in [ None, "" ]: 681 return default 682 else: 683 try: 684 return float(answer) 685 except ValueError: 686 print "Enter a floating point number."
    687
    688 -def _getReturn(prompt):
    689 """ 690 Get a return key from the user. 691 @param prompt: Prompt to show. 692 """ 693 raw_input(prompt)
    694 695 696 ######################################################################### 697 # Main routine 698 ######################################################################## 699 700 if __name__ == "__main__": 701 result = cli() 702 sys.exit(result) 703


    Module purge


    Functions

    executePurge

    Variables

    __package__
    logger

CedarBackup2.actions.store
    Package CedarBackup2 :: Package actions :: Module store

    Module store

    source code

    Implements the standard 'store' action.


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Dmitry Rutsky <rutsky@inbox.ru>
Functions
     
    executeStore(configPath, options, config)
    Executes the store backup action.
    source code
     
    writeImage(config, newDisc, stagingDirs)
    Builds and writes an ISO image containing the indicated stage directories.
    source code
     
    writeStoreIndicator(config, stagingDirs)
    Writes a store indicator file into staging directories.
    source code
     
    consistencyCheck(config, stagingDirs)
    Runs a consistency check against media in the backup device.
    source code
     
    writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs)
    Builds and writes an ISO image containing the indicated stage directories.
    source code
     
    _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)
    Gets a value for the newDisc flag based on blanking factor rules.
    source code
     
    _findCorrectDailyDir(options, config)
    Finds the correct daily staging directory to be written to disk.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.actions.store")
      __package__ = 'CedarBackup2.actions'
Function Details

    executeStore(configPath, options, config)

    source code 

    Executes the store backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.
    Notes:
    • The rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories.
    • When the store action is complete, we will write a store indicator to the daily staging directory we used, so it's obvious that the store action has completed.

    writeImage(config, newDisc, stagingDirs)

    source code 

    Builds and writes an ISO image containing the indicated stage directories.

    The generated image will contain each of the staging directories listed in stagingDirs. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the disc at /2005/02/10.

    Parameters:
    • config - Config object.
    • newDisc - Indicates whether the disc should be re-initialized
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing the image to disc.

    Note: This function is implemented in terms of writeImageBlankSafe. The newDisc flag is passed in for both rebuildMedia and todayIsStart.

    writeStoreIndicator(config, stagingDirs)

    source code 

    Writes a store indicator file into staging directories.

    The store indicator is written into each of the staging directories when either a store or rebuild action has written the staging directory to disc.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.

    consistencyCheck(config, stagingDirs)

    source code 

    Runs a consistency check against media in the backup device.

    It seems that sometimes, it's possible to create a corrupted multisession disc (i.e. one that cannot be read) although no errors were encountered while writing the disc. This consistency check makes sure that the data read from disc matches the data that was used to create the disc.

    The function mounts the device at a temporary mount point in the working directory, and then compares the indicated staging directories in the staging directory and on the media. The comparison is done via functionality in filesystem.py.

    If no exceptions are thrown, there were no problems with the consistency check. A positive confirmation of "no problems" is also written to the log with info priority.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - If the two directories are not equivalent.
    • IOError - If there is a problem working with the media.

    Warning: The implementation of this function is very UNIX-specific.
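The comparison described above is done with digest maps built by functionality in filesystem.py. As a hedged sketch of the digest-map idea (function names here are illustrative stand-ins, not the library's BackupFileList/compareDigestMaps API):

```python
import hashlib
import os

def digest_map(rootDir):
    """Map each file's path (relative to rootDir) to its SHA-1 digest.

    Illustrative stand-in for generating a digest map; using relative
    paths plays the role of the library's stripPrefix handling.
    """
    digests = {}
    for dirpath, dirnames, filenames in os.walk(rootDir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, rootDir)] = hashlib.sha1(f.read()).hexdigest()
    return digests

def compare_digest_maps(sourceDigest, discDigest):
    """Raise ValueError if the two digest maps differ in any way."""
    if sourceDigest != discDigest:
        raise ValueError("Media does not match the staging data.")
```

If the maps built from the staging directory and the mounted media are identical, the write is considered consistent; any missing, extra, or changed file surfaces as a digest mismatch.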

    writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs)

    source code 

    Builds and writes an ISO image containing the indicated stage directories.

    The generated image will contain each of the staging directories listed in stagingDirs. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the disc at /2005/02/10. The media will always be written with a media label specific to Cedar Backup.

    This function is similar to writeImage, but tries to implement a smarter blanking strategy.

    First, the media is always blanked if the rebuildMedia flag is true. Then, if rebuildMedia is false, blanking behavior and todayIsStart come into effect:

      If no blanking behavior is specified, and it is the start of the week,
      the disc will be blanked
    
      If blanking behavior is specified, and either the blank mode is "daily"
      or the blank mode is "weekly" and it is the start of the week, then 
      the disc will be blanked if it looks like the weekly backup will not
      fit onto the media.
    
      Otherwise, the disc will not be blanked
    

    How do we decide whether the weekly backup will fit onto the media? That is what the blanking factor is used for. The following formula is used:

  will backup fit? = (bytes available / (1 + bytes required)) <= blankFactor
    

    The blanking factor will vary from setup to setup, and will probably require some experimentation to get it right.

    Parameters:
    • config - Config object.
    • rebuildMedia - Indicates whether media should be rebuilt
    • todayIsStart - Indicates whether today is the starting day of the week
    • blankBehavior - Blank behavior from configuration, or None to use default behavior
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing the image to disc.
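The blanking-factor test described for writeImageBlankSafe can be written out directly; the function name and the example values below are illustrative, not taken from the library:

```python
def weekly_backup_fits(bytesAvailable, bytesRequired, blankFactor):
    """Apply the blanking-factor rule described above.

    The weekly backup is judged to fit when
    bytesAvailable / (1 + bytesRequired) <= blankFactor.
    The "1 +" guards against division by zero when nothing is staged yet.
    """
    return (float(bytesAvailable) / (1 + bytesRequired)) <= blankFactor

# e.g. 600 units available, 100 required, blanking factor 6.0:
# 600.0 / 101.0 is about 5.94, which is <= 6.0, so the backup should fit
```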

    _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)

    source code 

    Gets a value for the newDisc flag based on blanking factor rules.

    The blanking factor rules are described above by writeImageBlankSafe.

    Parameters:
    • writer - Previously configured image writer containing image entries
    • rebuildMedia - Indicates whether media should be rebuilt
    • todayIsStart - Indicates whether today is the starting day of the week
    • blankBehavior - Blank behavior from configuration, or None to use default behavior
    Returns:
    newDisc flag to be set on writer.

    _findCorrectDailyDir(options, config)

    source code 

    Finds the correct daily staging directory to be written to disk.

    In Cedar Backup v1.0, we assumed that the correct staging directory matched the current date. However, that assumption has problems. In particular, it breaks down if collect runs on one side of midnight and stage runs on the other, or if certain processes span midnight.

    For v2.0, I'm trying to be smarter. I'll first check the current day. If that directory is found, it's good enough. If it's not found, I'll look for a valid directory from the day before or day after which has not yet been staged, according to the stage indicator file. The first one I find, I'll use. If I use a directory other than for the current day and config.store.warnMidnite is set, a warning will be put in the log.

    There is one exception to this rule. If the options.full flag is set, then the special "span midnite" logic will be disabled and any existing store indicator will be ignored. I did this because I think that most users who run cback --full store twice in a row expect the command to generate two identical discs. With the other rule in place, running that command twice in a row could result in an error ("no unstored directory exists") or could even cause a completely unexpected directory to be written to disc (if some previous day's contents had not yet been written).

    Parameters:
    • options - Options object.
    • config - Config object.
    Returns:
    Correct staging dir, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If the staging directory cannot be found.

    Note: This code is probably longer and more verbose than it needs to be, but at least it's straightforward.
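    The day-selection rule above can be sketched as follows. This is a simplified illustration, not the real implementation: the indicator filename "cback.stage" is a hypothetical placeholder, and the real code also honors options.full and config.store.warnMidnite as described.

```python
import os
from datetime import date, timedelta

def find_daily_dir(staging_root, today=None, stage_indicator="cback.stage"):
    """Prefer today's staging directory; otherwise fall back to an adjacent
    day's directory that has not yet been staged (no indicator file)."""
    today = today or date.today()

    def dir_for(d):
        # Daily directories are laid out by date, e.g. /opt/stage/2005/02/10
        return os.path.join(staging_root, d.strftime("%Y/%m/%d"))

    todays = dir_for(today)
    if os.path.isdir(todays):
        return todays
    for delta in (-1, 1):  # day before, then day after
        candidate = dir_for(today + timedelta(days=delta))
        if os.path.isdir(candidate) and not os.path.exists(
                os.path.join(candidate, stage_indicator)):
            return candidate
    raise IOError("no usable staging directory found")
```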


    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.writer-module.html

    Module writer


    Variables

    __package__

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mbox.MboxConfig-class.html
    Package CedarBackup2 :: Package extend :: Module mbox :: Class MboxConfig

    Class MboxConfig

    source code

    object --+
             |
            MboxConfig
    

    Class representing mbox configuration.

    Mbox configuration is used for backing up mbox email files.

    The following restrictions exist on data in this class:

    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The mboxFiles list must be a list of MboxFile objects
    • The mboxDirs list must be a list of MboxDir objects

    For the mboxFiles and mboxDirs lists, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element is of the proper type.

    Unlike collect configuration, no global exclusions are allowed on this level. We only allow relative exclusions at the mbox directory level. Also, there is no configured ignore file. This is because mbox directory backups are not recursive.


    Note: Lists within this class are "unordered" for equality comparisons.
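    A minimal sketch of the type-checked list idea behind mboxFiles and mboxDirs validation. The class below is illustrative only, written in the spirit of the util.ObjectTypeList implementation mentioned above; it is not the real class.

```python
class TypedList(list):
    """A list that only accepts elements of one type, raising ValueError
    otherwise (a simplified stand-in for util.ObjectTypeList)."""

    def __init__(self, objType, objName):
        super(TypedList, self).__init__()
        self.objType = objType
        self.objName = objName

    def append(self, item):
        if not isinstance(item, self.objType):
            raise ValueError("Item must be a %s object." % self.objName)
        super(TypedList, self).append(item)
```

    With this arrangement, a property setter can simply copy incoming elements into the typed list and let the list itself enforce the restriction.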

    Instance Methods
     
    __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None)
    Constructor for the MboxConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setMboxFiles(self, value)
    Property target used to set the mboxFiles list.
    source code
     
    _getMboxFiles(self)
    Property target used to get the mboxFiles list.
    source code
     
    _setMboxDirs(self, value)
    Property target used to set the mboxDirs list.
    source code
     
    _getMboxDirs(self)
    Property target used to get the mboxDirs list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      collectMode
    Default collect mode.
      compressMode
    Default compress mode.
      mboxFiles
    List of mbox files to back up.
      mboxDirs
    List of mbox directories to back up.

    Inherited from object: __class__

    Method Details

    __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None)
    (Constructor)

    source code 

    Constructor for the MboxConfig class.

    Parameters:
    • collectMode - Default collect mode.
    • compressMode - Default compress mode.
    • mboxFiles - List of mbox files to back up
    • mboxDirs - List of mbox directories to back up
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setMboxFiles(self, value)

    source code 

    Property target used to set the mboxFiles list. Either the value must be None or each element must be an MboxFile.

    Raises:
    • ValueError - If the value is not an MboxFile

    _setMboxDirs(self, value)

    source code 

    Property target used to set the mboxDirs list. Either the value must be None or each element must be an MboxDir.

    Raises:
    • ValueError - If the value is not an MboxDir

    Property Details

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Default compress mode.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    mboxFiles

    List of mbox files to back up.

    Get Method:
    _getMboxFiles(self) - Property target used to get the mboxFiles list.
    Set Method:
    _setMboxFiles(self, value) - Property target used to set the mboxFiles list.

    mboxDirs

    List of mbox directories to back up.

    Get Method:
    _getMboxDirs(self) - Property target used to get the mboxDirs list.
    Set Method:
    _setMboxDirs(self, value) - Property target used to set the mboxDirs list.

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.postgresql-module.html

    Module postgresql


    Classes

    LocalConfig
    PostgresqlConfig

    Functions

    backupDatabase
    executeAction

    Variables

    POSTGRESQLDUMPALL_COMMAND
    POSTGRESQLDUMP_COMMAND
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.capacity-module.html
    Package CedarBackup2 :: Package extend :: Module capacity

    Module capacity

    source code

    Provides an extension to check remaining media capacity.

    Some users have asked for advance warning that their media is beginning to fill up. This is an extension that checks the current capacity of the media in the writer, and prints a warning if the media is more than X% full, or has fewer than X bytes of capacity remaining.
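    The check described above amounts to two threshold comparisons. Here is a hypothetical sketch (function and parameter names are illustrative, not the extension's actual API):

```python
def capacity_warning(total_bytes, used_bytes, max_percentage=None, min_bytes=None):
    """Return a warning string if the media is more than max_percentage full,
    or has fewer than min_bytes of capacity remaining; otherwise None."""
    remaining = total_bytes - used_bytes
    pct_used = 100.0 * used_bytes / total_bytes
    if max_percentage is not None and pct_used > max_percentage:
        return "Media is %.1f%% full" % pct_used
    if min_bytes is not None and remaining < min_bytes:
        return "Only %d bytes of capacity remain" % remaining
    return None
```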


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      PercentageQuantity
    Class representing a percentage quantity.
      CapacityConfig
    Class representing capacity configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the capacity action.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.extend.capacity")
      __package__ = 'CedarBackup2.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the capacity action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    CedarBackup2-2.22.0/doc/interface/CedarBackup2-module.html
    Package CedarBackup2

    Package CedarBackup2

    source code

    Implements local and remote backups to CD or DVD media.

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules

    Variables
      __package__ = None
    hash(x)
    CedarBackup2-2.22.0/doc/interface/CedarBackup2.cli-module.html
    Package CedarBackup2 :: Module cli

    Module cli

    source code

    Provides command-line interface implementation for the cback script.

    Summary

    The functionality in this module encapsulates the command-line interface for the cback script. The cback script itself is very short, basically just an invocation of one function implemented here. That, in turn, makes it simpler to validate the command line interface (for instance, it's easier to run pychecker against a module, and unit tests are easier, too).

    The objects and functions implemented in this module are probably not useful to any code external to Cedar Backup. Anyone else implementing their own command-line interface would have to reimplement (or at least enhance) all of this anyway.

    Backwards Compatibility

    The command line interface has changed between Cedar Backup 1.x and Cedar Backup 2.x. Some new switches have been added, and the actions have become simple arguments rather than switches (which is a much more standard command line format). Old 1.x command lines are generally no longer valid.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      Options
    Class representing command-line options for the cback script.
      _ActionItem
    Class representing a single action to be executed.
      _ManagedActionItem
    Class representing a single action to be executed on a managed peer.
      _ActionSet
    Class representing a set of local actions to be executed.
    Functions
     
    cli()
    Implements the command-line interface for the cback script.
    source code
     
    _usage(fd=sys.stdout)
    Prints usage information for the cback script.
    source code
     
    _version(fd=sys.stdout)
    Prints version information for the cback script.
    source code
     
    _diagnostics(fd=sys.stdout)
    Prints runtime diagnostics information.
    source code
     
    setupLogging(options)
    Set up logging based on command-line options.
    source code
     
    _setupLogfile(options)
    Sets up and creates logfile as needed.
    source code
     
    _setupFlowLogging(logfile, options)
    Sets up flow logging.
    source code
     
    _setupOutputLogging(logfile, options)
    Sets up command output logging.
    source code
     
    _setupDiskFlowLogging(flowLogger, logfile, options)
    Sets up on-disk flow logging.
    source code
     
    _setupScreenFlowLogging(flowLogger, options)
    Sets up on-screen flow logging.
    source code
     
    _setupDiskOutputLogging(outputLogger, logfile, options)
    Sets up on-disk command output logging.
    source code
     
    setupPathResolver(config)
    Set up the path resolver singleton based on configuration.
    source code
    Variables
      DEFAULT_CONFIG = '/etc/cback.conf'
    The default configuration file.
      DEFAULT_LOGFILE = '/var/log/cback.log'
    The default log file path.
      DEFAULT_OWNERSHIP = ['root', 'adm']
    Default ownership for the logfile.
      DEFAULT_MODE = 416
    Default file permissions mode on the logfile.
      VALID_ACTIONS = ['collect', 'stage', 'store', 'purge', 'rebuil...
    List of valid actions.
      COMBINE_ACTIONS = ['collect', 'stage', 'store', 'purge']
    List of actions which can be combined with other actions.
      NONCOMBINE_ACTIONS = ['rebuild', 'validate', 'initialize', 'all']
    List of actions which cannot be combined with other actions.
      logger = logging.getLogger("CedarBackup2.log.cli")
      DISK_LOG_FORMAT = '%(asctime)s --> [%(levelname)-7s] %(message)s'
      DISK_OUTPUT_FORMAT = '%(message)s'
      SCREEN_LOG_FORMAT = '%(message)s'
      SCREEN_LOG_STREAM = sys.stdout
      DATE_FORMAT = '%Y-%m-%dT%H:%M:%S %Z'
      REBUILD_INDEX = 0
      VALIDATE_INDEX = 0
      INITIALIZE_INDEX = 0
      COLLECT_INDEX = 100
      STAGE_INDEX = 200
      STORE_INDEX = 300
      PURGE_INDEX = 400
      SHORT_SWITCHES = 'hVbqc:fMNl:o:m:OdsD'
      LONG_SWITCHES = ['help', 'version', 'verbose', 'quiet', 'confi...
      __package__ = 'CedarBackup2'
    Function Details

    cli()

    source code 

    Implements the command-line interface for the cback script.

    Essentially, this is the "main routine" for the cback script. It does all of the argument processing for the script, and then sets about executing the indicated actions.

    As a general rule, only the actions indicated on the command line will be executed. We will accept any of the built-in actions and any of the configured extended actions (which makes action list verification a two-step process).

    The 'all' action has a special meaning: it means that the built-in set of actions (collect, stage, store, purge) will all be executed, in that order. Extended actions will be ignored as part of the 'all' action.

    Raised exceptions always result in an immediate return. Otherwise, we generally return when all specified actions have been completed. Actions are ignored if the help, version or validate flags are set.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 2.5
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 4: Error parsing indicated configuration file
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing specified backup actions
    Returns:
    Error code as described above.
    Notes:
    • This function contains a good amount of logging at the INFO level, because this is the right place to document high-level flow of control (i.e. what the command-line options were, what config file was being used, etc.)
    • We assume that anything that must be seen on the screen is logged at the ERROR level. Errors that occur before logging can be configured are written to sys.stderr.

    _usage(fd=sys.stdout)

    source code 

    Prints usage information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _version(fd=sys.stdout)

    source code 

    Prints version information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _diagnostics(fd=sys.stdout)

    source code 

    Prints runtime diagnostics information.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    setupLogging(options)

    source code 

    Set up logging based on command-line options.

    There are two kinds of logging: flow logging and output logging. Output logging contains information about system commands executed by Cedar Backup, for instance the calls to mkisofs or mount, etc. Flow logging contains error and informational messages used to understand program flow. Flow log messages and output log messages are written to two different logger targets (CedarBackup2.log and CedarBackup2.output). Flow log messages are written at the ERROR, INFO and DEBUG log levels, while output log messages are generally only written at the INFO log level.

    By default, output logging is disabled. When the options.output or options.debug flags are set, output logging will be written to the configured logfile. Output logging is never written to the screen.

    By default, flow logging is enabled at the ERROR level to the screen and at the INFO level to the configured logfile. If the options.quiet flag is set, flow logging is enabled at the INFO level to the configured logfile only (i.e. no output will be sent to the screen). If the options.verbose flag is set, flow logging is enabled at the INFO level to both the screen and the configured logfile. If the options.debug flag is set, flow logging is enabled at the DEBUG level to both the screen and the configured logfile.

    Parameters:
    • options (Options object) - Command-line options.
    Returns:
    Path to logfile on disk.
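    The two-logger scheme described above can be sketched roughly as follows. The helper name and flag handling are illustrative; the real work is split across _setupFlowLogging, _setupOutputLogging, and friends.

```python
import logging
import sys

def setup_loggers(logfile, quiet=False, verbose=False, debug=False):
    """Sketch: flow messages go to CedarBackup2.log (disk, plus screen
    depending on flags); command output goes to CedarBackup2.output
    (disk only, never the screen)."""
    level = logging.DEBUG if debug else logging.INFO
    flow = logging.getLogger("CedarBackup2.log")
    flow.setLevel(level)
    disk = logging.FileHandler(logfile)
    disk.setLevel(level)
    flow.addHandler(disk)
    if not quiet:
        screen = logging.StreamHandler(sys.stdout)
        if debug:
            screen.setLevel(logging.DEBUG)
        elif verbose:
            screen.setLevel(logging.INFO)
        else:
            screen.setLevel(logging.ERROR)  # default: errors only on screen
        flow.addHandler(screen)
    output = logging.getLogger("CedarBackup2.output")
    output.setLevel(logging.INFO)
    output.addHandler(disk)  # output logging is never written to the screen
    return flow, output
```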

    _setupLogfile(options)

    source code 

    Sets up and creates logfile as needed.

    If the logfile already exists on disk, it will be left as-is, under the assumption that it was created with appropriate ownership and permissions. If the logfile does not exist on disk, it will be created as an empty file. Ownership and permissions will remain at their defaults unless user/group and/or mode are set in the options. We ignore errors setting the indicated user and group.

    Parameters:
    • options - Command-line options.
    Returns:
    Path to logfile on disk.

    Note: This function is vulnerable to a race condition. If the log file does not exist when the function is run, it will attempt to create the file as safely as possible (using O_CREAT). If two processes attempt to create the file at the same time, then one of them will fail. In practice, this shouldn't really be a problem, but it might happen occasionally if two instances of cback run concurrently or if cback collides with logrotate or something.
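    A hypothetical sketch of that race-tolerant creation. Note the sketch adds O_EXCL alongside O_CREAT so that exactly one of two concurrent creators succeeds, matching the behavior the note describes; the loser simply uses the file the winner created.

```python
import os

def create_logfile_if_missing(path, mode=0o640):
    """Create the logfile atomically if it does not exist; tolerate losing
    the creation race to another process."""
    if not os.path.exists(path):
        try:
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, mode)
            os.close(fd)
        except OSError:
            pass  # another process created it first; that is fine
```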

    _setupFlowLogging(logfile, options)

    source code 

    Sets up flow logging.

    Parameters:
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupOutputLogging(logfile, options)

    source code 

    Sets up command output logging.

    Parameters:
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupDiskFlowLogging(flowLogger, logfile, options)

    source code 

    Sets up on-disk flow logging.

    Parameters:
    • flowLogger - Python flow logger object.
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupScreenFlowLogging(flowLogger, options)

    source code 

    Sets up on-screen flow logging.

    Parameters:
    • flowLogger - Python flow logger object.
    • options - Command-line options.

    _setupDiskOutputLogging(outputLogger, logfile, options)

    source code 

    Sets up on-disk command output logging.

    Parameters:
    • outputLogger - Python command output logger object.
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    setupPathResolver(config)

    source code 

    Set up the path resolver singleton based on configuration.

    Cedar Backup's path resolver is implemented in terms of a singleton, the PathResolverSingleton class. This function takes options configuration, converts it into the dictionary form needed by the singleton, and then initializes the singleton. After that, any function that needs to resolve the path of a command can use the singleton.

    Parameters:
    • config (Config object) - Configuration
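    A minimal sketch of the singleton pattern described here. The class name mirrors PathResolverSingleton, but the method names and behavior below are illustrative, not the real API; the mapping shown (cdrecord resolving to wodim) is the kind of override Debian customization uses.

```python
class PathResolverSingleton(object):
    """Resolve command names to configured paths; one shared instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(PathResolverSingleton, cls).__new__(cls)
            cls._instance._mapping = {}
        return cls._instance

    def fill(self, mapping):
        """Load configuration into the shared mapping."""
        self._mapping.update(mapping)

    def lookup(self, command, default=None):
        """Return the configured path, or the bare command if unconfigured."""
        return self._mapping.get(command, default or command)

# Configuration is converted to a dictionary and loaded once; any later
# lookup anywhere in the program sees the same mapping.
PathResolverSingleton().fill({"cdrecord": "/usr/bin/wodim"})
```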

    Variables Details

    VALID_ACTIONS

    List of valid actions.
    Value:
    ['collect',
     'stage',
     'store',
     'purge',
     'rebuild',
     'validate',
     'initialize',
     'all']
    

    LONG_SWITCHES

    Value:
    ['help',
     'version',
     'verbose',
     'quiet',
     'config=',
     'full',
     'managed',
     'managed-only',
    ...
    

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.tools.span-module.html
    Package CedarBackup2 :: Package tools :: Module span

    Module span

    source code

    Spans staged data among multiple discs.

    This is the Cedar Backup span tool. It is intended for use by people who stage more data than can fit on a single disc. It allows a user to split staged data among more than one disc. It can't be an extension because it requires user input when switching media.

    Most configuration is taken from the Cedar Backup configuration file, specifically the store section. A few pieces of configuration are taken directly from the user.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      SpanOptions
    Tool-specific command-line options.
    Functions
     
    cli()
    Implements the command-line interface for the cback-span script.
    source code
     
    _usage(fd=sys.stdout)
    Prints usage information for the cback script.
    source code
     
    _version(fd=sys.stdout)
    Prints version information for the cback script.
    source code
     
    _executeAction(options, config)
    Implements the guts of the cback-span tool.
    source code
     
    _findDailyDirs(stagingDir)
    Returns a list of all daily staging directories that have not yet been stored.
    source code
     
    _writeStoreIndicator(config, dailyDirs)
    Writes a store indicator file into daily directories.
    source code
     
    _getWriter(config)
    Gets a writer and media capacity from store configuration.
    source code
     
    _writeDisc(config, writer, spanItem)
    Writes a span item to disc.
    source code
     
    _discInitializeImage(config, writer, spanItem)
    Initialize an ISO image for a span item.
    source code
     
    _discWriteImage(config, writer)
    Writes an ISO image for a span item.
    source code
     
    _discConsistencyCheck(config, writer, spanItem)
    Run a consistency check on an ISO image for a span item.
    source code
     
    _consistencyCheck(config, fileList)
    Runs a consistency check against media in the backup device.
    source code
     
    _getYesNoAnswer(prompt, default)
    Get a yes/no answer from the user.
    source code
     
    _getChoiceAnswer(prompt, default, validChoices)
    Get a particular choice from the user.
    source code
     
    _getFloat(prompt, default)
    Get a floating point number from the user.
    source code
     
    _getReturn(prompt)
    Get a return key from the user.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.tools.span")
      __package__ = 'CedarBackup2.tools'
    Function Details

    cli()

    source code 

    Implements the command-line interface for the cback-span script.

    Essentially, this is the "main routine" for the cback-span script. It does all of the argument processing for the script, and then also implements the tool functionality.

    This function looks pretty similar to CedarBackup2.cli.cli(). It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 2.5
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 4: Error parsing indicated configuration file
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing other parts of the script
    Returns:
    Error code as described above.

    Note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively.

    _usage(fd=sys.stdout)

    source code 

    Prints usage information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _version(fd=sys.stdout)

    source code 

    Prints version information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _executeAction(options, config)

    source code 

    Implements the guts of the cback-span tool.

    Parameters:
    • options (SpanOptions object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • Exception - Under many generic error conditions

    _findDailyDirs(stagingDir)

    source code 

    Returns a list of all daily staging directories that have not yet been stored.

    The store indicator file cback.store will be written to a daily staging directory once that directory is written to disc. So, this function looks at each daily staging directory within the configured staging directory, and returns a list of those which do not contain the indicator file.

    Returned is a tuple containing two items: a list of daily staging directories, and a BackupFileList containing all files among those staging directories.

    Parameters:
    • stagingDir - Configured staging directory
    Returns:
    Tuple (staging dirs, backup file list)
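    The indicator-file scan described above can be sketched like this. The cback.store filename comes from the description; the function name and the assumption that daily directories are laid out as YYYY/MM/DD are illustrative.

```python
import os

def find_unstored_dirs(staging_dir, indicator="cback.store"):
    """Return daily staging directories under staging_dir that do not yet
    contain the store indicator file."""
    unstored = []
    for year in sorted(os.listdir(staging_dir)):
        year_path = os.path.join(staging_dir, year)
        if not os.path.isdir(year_path):
            continue
        for month in sorted(os.listdir(year_path)):
            month_path = os.path.join(year_path, month)
            if not os.path.isdir(month_path):
                continue
            for day in sorted(os.listdir(month_path)):
                daily = os.path.join(month_path, day)
                if os.path.isdir(daily) and not os.path.exists(
                        os.path.join(daily, indicator)):
                    unstored.append(daily)
    return unstored
```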

    _writeStoreIndicator(config, dailyDirs)

    source code 

    Writes a store indicator file into daily directories.

    Parameters:
    • config - Config object.
    • dailyDirs - List of daily directories

    _getWriter(config)

    source code 

    Gets a writer and media capacity from store configuration. Returned is a writer and a media capacity in bytes.

    Parameters:
    • config - Cedar Backup configuration
    Returns:
    Tuple of (writer, mediaCapacity)

    _writeDisc(config, writer, spanItem)

    source code 

    Writes a span item to disc.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _discInitializeImage(config, writer, spanItem)

    source code 

    Initialize an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _discWriteImage(config, writer)

    source code 

    Writes an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use

    _discConsistencyCheck(config, writer, spanItem)

    source code 

    Run a consistency check on an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _consistencyCheck(config, fileList)

    source code 

    Runs a consistency check against media in the backup device.

    The function mounts the device at a temporary mount point in the working directory, and then compares the passed-in file list's digest map with the one generated from the disc. The two lists should be identical.

    If no exceptions are thrown, there were no problems with the consistency check.

    Parameters:
    • config - Config object.
    • fileList - BackupFileList whose contents to check against
    Raises:
    • ValueError - If the check fails
    • IOError - If there is a problem working with the media.

    Warning: The implementation of this function is very UNIX-specific.
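    The digest-map comparison can be sketched as below. This is a simplified illustration: the choice of SHA-1, the function names, and comparing by path relative to each root are assumptions, and the real code works through BackupFileList and mounts the media itself.

```python
import hashlib
import os

def digest_map(root):
    """Map each file's path (relative to root) to its SHA-1 digest."""
    digests = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                digests[rel] = hashlib.sha1(f.read()).hexdigest()
    return digests

def consistency_check(staging_root, mount_root):
    """Raise ValueError unless the two trees contain identical files."""
    if digest_map(staging_root) != digest_map(mount_root):
        raise ValueError("Consistency check failed")
```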

    _getYesNoAnswer(prompt, default)

    source code 

    Get a yes/no answer from the user. The default will be placed at the end of the prompt. A "Y" or "y" is considered yes, anything else no. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is blank
    Returns:
    Boolean true/false corresponding to Y/N
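    The prompt rule above is simple enough to sketch directly. The input_fn parameter is a hypothetical hook added here so the sketch is testable without a terminal:

```python
def get_yes_no(prompt, default, input_fn=input):
    """Yes/no prompt: "Y" or "y" means yes, anything else means no,
    and a blank response falls back to the default."""
    answer = input_fn("%s [%s]: " % (prompt, default)).strip()
    if answer == "":
        answer = default
    return answer in ("Y", "y")
```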

    _getChoiceAnswer(prompt, default, validChoices)

    source code 

    Get a particular choice from the user. The default will be placed at the end of the prompt. The function loops until getting a valid choice. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is None or blank.
    • validChoices - List of valid choices (strings)
    Returns:
    Valid choice from user.

    _getFloat(prompt, default)

    source code 

    Get a floating point number from the user. The default will be placed at the end of the prompt. The function loops until getting a valid floating point number. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is None or blank.
    Returns:
    Floating point number from user

    _getReturn(prompt)

    source code 

    Get a return key from the user.

    Parameters:
    • prompt - Prompt to show.

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend-module.html

    Module extend


    Variables


    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.action-module.html

    Module action


    Variables

    __package__

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.dvdwriter.DvdWriter-class.html
    Package CedarBackup2 :: Package writers :: Module dvdwriter :: Class DvdWriter

    Class DvdWriter

    source code

    object --+
             |
            DvdWriter
    

    Class representing a device that knows how to write some kinds of DVD media.

    Summary

    This is a class representing a device that knows how to write some kinds of DVD media. It provides common operations for the device, such as ejecting the media and writing data to the media.

    This class is implemented in terms of the eject and growisofs utilities, both of which should be available on most UN*X platforms.

    Image Writer Interface

    The following methods make up the "image writer" interface shared with other kinds of writers:

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()
    

    Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer.

    The media attribute is also assumed to be available.

    Unlike the CdWriter, the DvdWriter can only operate in terms of filesystem devices, not SCSI devices. So, although the constructor interface accepts a SCSI device parameter for the sake of compatibility, it's not used.

    Media Types

    This class knows how to write to DVD+R and DVD+RW media, represented by the following constants:

    • MEDIA_DVDPLUSR: DVD+R media (4.4 GB capacity)
    • MEDIA_DVDPLUSRW: DVD+RW media (4.4 GB capacity)

    The difference is that DVD+RW media can be rewritten, while DVD+R media cannot be (although at present, DvdWriter does not really differentiate between rewritable and non-rewritable media).

    The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
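A quick arithmetic check shows how the 4.4 "true" gigabytes relate to the familiar marketing figure:

```python
GBYTES = 1024 * 1024 * 1024          # a "true" gigabyte, as Cedar Backup counts them

DVD_CAPACITY_BYTES = int(4.4 * GBYTES)
# 4724464025 bytes -- roughly the 4.7 * 10**9 bytes that vendors market as "4.7 GB"
```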

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

    Device Attributes vs. Media Attributes

    As with the cdwriter functionality, a given dvdwriter instance has two different kinds of attributes associated with it. I call these device attributes and media attributes.

    Device attributes are things which can be determined without looking at the media. Media attributes are attributes which vary depending on the state of the media. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls.

    Compared to cdwriters, dvdwriters have very few attributes. This is due to differences between the way growisofs works relative to cdrecord.

    Media Capacity

    One major difference between the cdrecord/mkisofs utilities used by the cdwriter class and the growisofs utility used here is that the process of estimating remaining capacity and image size is more straightforward with cdrecord/mkisofs than with growisofs.

    In this class, remaining capacity is calculated by doing a dry run of growisofs and grabbing some information from the output of that command. Image size is estimated by asking the IsoImage class for an estimate and then adding on a "fudge factor" determined through experimentation.

    Testing

    It's rather difficult to test this code in an automated fashion, even if you have access to a physical DVD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to.

    Because of this, some of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the "difficult" functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all.

    Instance Methods
     
    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=2, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    Initializes a DVD writer object.
    source code
     
    isRewritable(self)
    Indicates whether the media is rewritable per configuration.
    source code
     
    retrieveCapacity(self, entireDisc=False)
    Retrieves capacity for the current media in terms of a MediaCapacity object.
    source code
     
    openTray(self)
    Opens the device's tray and leaves it open.
    source code
     
    closeTray(self)
    Closes the device's tray.
    source code
     
    refreshMedia(self)
    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.
    source code
     
    initializeImage(self, newDisc, tmpdir, mediaLabel=None)
    Initializes the writer's associated ISO image.
    source code
     
    addImageEntry(self, path, graftPoint)
    Adds a filepath entry to the writer's associated ISO image.
    source code
     
    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)
    Writes an ISO image to the media in the device.
    source code
     
    setImageNewDisc(self, newDisc)
    Resets (overrides) the newDisc flag on the internal image.
    source code
     
    getEstimatedImageSize(self)
    Gets the estimated size of the image associated with the writer.
    source code
     
    _writeImage(self, newDisc, imagePath, entries, mediaLabel=None)
    Writes an image to disc using either an entries list or an ISO image on disk.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _getScsiId(self)
    Property target used to get the SCSI id value.
    source code
     
    _getHardwareId(self)
    Property target used to get the hardware id value.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _getMedia(self)
    Property target used to get the media description.
    source code
     
    _getDeviceHasTray(self)
    Property target used to get the device-has-tray flag.
    source code
     
    _getDeviceCanEject(self)
    Property target used to get the device-can-eject flag.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the configured refresh media delay, in seconds.
    source code
     
    _getEjectDelay(self)
    Property target used to get the configured eject delay, in seconds.
    source code
     
    unlockTray(self)
    Unlocks the device's tray via 'eject -i off'.
    source code
     
    _retrieveSectorsUsed(self)
    Retrieves the number of sectors used on the current media.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _getEstimatedImageSize(entries)
    Gets the estimated size of a set of image entries.
    source code
     
    _searchForOverburn(output)
    Search for an "overburn" error message in growisofs output.
    source code
     
    _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False)
    Builds a list of arguments to be passed to a growisofs command.
    source code
     
    _parseSectorsUsed(output)
    Parse sectors used information out of growisofs output.
    source code
    Properties
      device
    Filesystem device name for this writer.
      scsiId
    SCSI id for the device (saved for reference only).
      hardwareId
    Hardware id for this writer (always the device path).
      driveSpeed
    Speed at which the drive writes.
      media
    Definition of media that is expected to be in the device.
      deviceHasTray
    Indicates whether the device has a media tray.
      deviceCanEject
    Indicates whether the device supports ejecting its media.
      refreshMediaDelay
    Refresh media delay, in seconds.
      ejectDelay
    Eject delay, in seconds.

    Inherited from object: __class__

    Method Details

    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=2, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    (Constructor)

    source code 

    Initializes a DVD writer object.

    Since growisofs can only address devices using the device path (i.e. /dev/dvd), the hardware id will always be set based on the device. If passed in, it will be saved for reference purposes only.

    We have no way to query the device to ask whether it has a tray or can be safely opened and closed. So, the noEject flag is used to set these values. If noEject=False, then we assume a tray exists and open/close is safe. If noEject=True, then we assume that there is no tray and open/close is not safe.

    Parameters:
    • device (Absolute path to a filesystem device, i.e. /dev/dvd) - Filesystem device associated with this writer.
    • scsiId (If provided, SCSI id in the form [<method>:]scsibus,target,lun) - SCSI id for the device (optional, for reference only).
    • driveSpeed (Use 2 for 2x device, etc. or None to use device default.) - Speed at which the drive writes.
    • mediaType (One of the valid media type as discussed above.) - Type of the media that is assumed to be in the drive.
    • noEject (Boolean true/false) - Tells Cedar Backup that the device cannot safely be ejected
    • refreshMediaDelay (Number of seconds, an integer >= 0) - Refresh media delay to use, if any
    • ejectDelay (Number of seconds, an integer >= 0) - Eject delay to use, if any
    • unittest (Boolean true/false) - Turns off certain validations, for use in unit testing.
    Raises:
    • ValueError - If the device is not valid for some reason.
    • ValueError - If the SCSI id is not in a valid form.
    • ValueError - If the drive speed is not an integer >= 1.
    Overrides: object.__init__

    Note: The unittest parameter should never be set to True outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose.

    retrieveCapacity(self, entireDisc=False)

    source code 

    Retrieves capacity for the current media in terms of a MediaCapacity object.

    If entireDisc is passed in as True, the capacity will be for the entire disc, as if it were to be rewritten from scratch. The same will happen if the disc can't be read for some reason. Otherwise, the capacity will be calculated by subtracting the sectors currently used on the disc, as reported by growisofs itself.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    Returns:
    MediaCapacity object describing the capacity of the media.
    Raises:
    • ValueError - If there is a problem parsing the growisofs output
    • IOError - If the media could not be read for some reason.
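The capacity logic described above reduces to simple sector arithmetic. This sketch assumes the standard 2048-byte DVD sector; the function name is illustrative, not DvdWriter's.

```python
SECTOR_SIZE = 2048  # bytes per DVD data sector

def remaining_capacity(total_sectors, sectors_used, entire_disc=False):
    # For the entire disc, pretend nothing has been written yet; this also
    # models the documented fallback when the disc cannot be read.
    if entire_disc:
        sectors_used = 0
    return (total_sectors - sectors_used) * SECTOR_SIZE
```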

    openTray(self)

    source code 

    Opens the device's tray and leaves it open.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag.

    Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy.

    Raises:
    • IOError - If there is an error talking to the device.
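The error-handling strategy described above -- eject, and on failure unlock with 'eject -i off' and retry -- might be sketched like this. The run parameter is an addition for testability, and the retry structure is an assumption, not DvdWriter's exact code.

```python
import subprocess

def open_tray(device, run=subprocess.check_call):
    """Eject the media; if the drive appears locked, unlock it and retry."""
    try:
        run(["eject", device])
    except subprocess.CalledProcessError:
        run(["eject", "-i", "off", device])  # unlock the tray, per the workaround above
        run(["eject", device])               # then retry the eject
```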

    closeTray(self)

    source code 

    Closes the device's tray.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    Raises:
    • IOError - If there is an error talking to the device.

    refreshMedia(self)

    source code 

    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.

    Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.)

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though.

    Raises:
    • IOError - If there is an error talking to the device.

    initializeImage(self, newDisc, tmpdir, mediaLabel=None)

    source code 

    Initializes the writer's associated ISO image.

    This method initializes the image instance variable so that the caller can use the addImageEntry method. Once entries have been added, the writeImage method can be called with no arguments.

    Parameters:
    • newDisc (Boolean true/false) - Indicates whether the disc should be re-initialized
    • tmpdir (String representing a directory path on disk) - Temporary directory to use if needed
    • mediaLabel (String, no more than 25 characters long) - Media label to be applied to the image, if any

    addImageEntry(self, path, graftPoint)

    source code 

    Adds a filepath entry to the writer's associated ISO image.

    The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass None.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    Raises:
    • ValueError - If initializeImage() was not previously called
    • ValueError - If the path is not a valid file or directory

    Note: Before calling this method, you must call initializeImage.

    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)

    source code 

    Writes an ISO image to the media in the device.

    If newDisc is passed in as True, we assume that the entire disc will be re-created from scratch. Note that unlike CdWriter, DvdWriter does not blank rewritable media before reusing it; however, growisofs is called such that the media will be re-initialized as needed.

    If imagePath is passed in as None, then the existing image configured with initializeImage() will be used. Under these circumstances, the passed-in newDisc flag will be ignored and the value passed in to initializeImage() will apply instead.

    The writeMulti argument is ignored. It exists for compatibility with the Cedar Backup image writer interface.

    Parameters:
    • imagePath (String representing a path on disk) - Path to an ISO image on disk, or None to use writer's image
    • newDisc (Boolean true/false.) - Indicates whether the disc should be re-initialized
    • writeMulti (Boolean true/false) - Unused
    Raises:
    • ValueError - If the image path is not absolute.
    • ValueError - If some path cannot be encoded properly.
    • IOError - If the media could not be written to for some reason.
    • ValueError - If no image is passed in and initializeImage() was not previously called

    Note: The image size indicated in the log ("Image size will be...") is an estimate. The estimate is conservative and is probably larger than the actual space that dvdwriter will use.

    setImageNewDisc(self, newDisc)

    source code 

    Resets (overrides) the newDisc flag on the internal image.

    Parameters:
    • newDisc - New disc flag to set
    Raises:
    • ValueError - If initializeImage() was not previously called

    getEstimatedImageSize(self)

    source code 

    Gets the estimated size of the image associated with the writer.

    This is an estimate and is conservative. The actual image could be as much as 450 blocks (sectors) smaller under some circumstances.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If initializeImage() was not previously called

    _writeImage(self, newDisc, imagePath, entries, mediaLabel=None)

    source code 

    Writes an image to disc using either an entries list or an ISO image on disk.

    Callers are assumed to have done validation on paths, etc. before calling this method.

    Parameters:
    • newDisc - Indicates whether the disc should be re-initialized
    • imagePath - Path to an ISO image on disk, or None to use entries
    • entries - Mapping from path to graft point, or None to use imagePath
    Raises:
    • IOError - If the media could not be written to for some reason.

    _getEstimatedImageSize(entries)
    Static Method

    source code 

    Gets the estimated size of a set of image entries.

    This is implemented in terms of the IsoImage class. The returned value is calculated by adding a "fudge factor" to the value from IsoImage. This fudge factor was determined by experimentation and is conservative -- the actual image could be as much as 450 blocks smaller under some circumstances.

    Parameters:
    • entries - Dictionary mapping path to graft point.
    Returns:
    Total estimated size of image, in bytes.
    Raises:
    • ValueError - If there are no entries in the dictionary
    • ValueError - If any path in the dictionary does not exist
    • IOError - If there is a problem calling mkisofs.

    _searchForOverburn(output)
    Static Method

    source code 

    Search for an "overburn" error message in growisofs output.

    The growisofs command returns a non-zero exit code and puts a message into the output -- even on a dry run -- if there is not enough space on the media. This is called an "overburn" condition.

    The error message looks like this:

      :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!
    

    This method looks for the overburn error message anywhere in the output. If a matching error message is found, an IOError exception is raised containing relevant information about the problem. Otherwise, the method call returns normally.

    Parameters:
    • output - List of output lines to search, as from executeCommand
    Raises:
    • IOError - If an overburn condition is found.
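A hypothetical version of the search might look like this; the regular expression is an assumption modeled on the sample message, not the pattern DvdWriter actually uses.

```python
import re

# Matches lines like ":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!"
OVERBURN_PATTERN = re.compile(r":-\(\s*(\S+):\s*(\d+)\s*blocks are free,\s*(\d+)\s*to be written!")

def search_for_overburn(output):
    """Raise IOError if any output line reports an overburn condition."""
    for line in output:
        match = OVERBURN_PATTERN.search(line)
        if match:
            device, free, needed = match.group(1), int(match.group(2)), int(match.group(3))
            raise IOError("Overburn on %s: %d blocks free, %d to be written"
                          % (device, free, needed))
```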

    _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False)
    Static Method

    source code 

    Builds a list of arguments to be passed to a growisofs command.

    The arguments will either cause growisofs to write the indicated image file to disc, or will pass growisofs a list of directories or files that should be written to disc.

    If a new image is created, it will always be created with Rock Ridge extensions (-r). A volume name will be applied (-V) if mediaLabel is not None.

    Parameters:
    • newDisc - Indicates whether the disc should be re-initialized
    • hardwareId - Hardware id for the device
    • driveSpeed - Speed at which the drive writes.
    • imagePath - Path to an ISO image on disk, or None to use entries
    • entries - Mapping from path to graft point, or None to use imagePath
    • mediaLabel - Media label to set on the image, if any
    • dryRun - Says whether to make this a dry run (for checking capacity)
    Returns:
    List suitable for passing to util.executeCommand as args.
    Raises:
    • ValueError - If caller does not pass one or the other of imagePath or entries.
    Notes:
    • If we write an existing image to disc, then the mediaLabel is ignored. The media label is an attribute of the image, and should be set on the image when it is created.
    • We always pass the undocumented option -use-the-force-luke=tty to growisofs. Without this option, growisofs will refuse to execute certain actions when running from cron. A good example is -Z, which happily overwrites an existing DVD from the command-line, but fails when run from cron. It took a while to figure that out, since it worked every time I tested it by hand. :(
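The rules above can be sketched as follows. Flag ordering, the -dry-run handling, and the graft-point entry syntax are assumptions based on this description, not the exact arguments DvdWriter builds.

```python
def build_write_args(hardware_id, new_disc, image_path=None, entries=None,
                     drive_speed=None, media_label=None, dry_run=False):
    # Exactly one of image_path / entries must be provided, per the docs above.
    if (image_path is None) == (entries is None):
        raise ValueError("Pass exactly one of image_path or entries.")
    args = ["-use-the-force-luke=tty"]        # needed so growisofs works from cron
    if dry_run:
        args.append("-dry-run")
    if drive_speed is not None:
        args.append("-speed=%d" % drive_speed)
    args.append("-Z" if new_disc else "-M")   # -Z re-initializes, -M appends a session
    if image_path is not None:
        args.append("%s=%s" % (hardware_id, image_path))  # burn an existing ISO image
    else:
        args.append(hardware_id)
        args.append("-r")                     # new images always get Rock Ridge
        if media_label is not None:
            args.extend(["-V", media_label])  # volume name, only when building an image
        args.append("-graft-points")
        for path, graft_point in sorted(entries.items()):
            args.append("%s/=%s" % (graft_point, path) if graft_point else path)
    return args
```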

    unlockTray(self)

    source code 

    Unlocks the device's tray via 'eject -i off'.

    Raises:
    • IOError - If there is an error talking to the device.

    _retrieveSectorsUsed(self)

    source code 

    Retrieves the number of sectors used on the current media.

    This is a little ugly. We need to call growisofs in "dry-run" mode and parse some information from its output. However, to do that, we need to create a dummy file that we can pass to the command -- and we have to make sure to remove it later.

    Once growisofs has been run, then we call _parseSectorsUsed to parse the output and calculate the number of sectors used on the media.

    Returns:
    Number of sectors used on the media

    _parseSectorsUsed(output)
    Static Method

    source code 

    Parse sectors used information out of growisofs output.

    The first line of a growisofs run looks something like this:

      Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'
    

    Dmitry has determined that the seek value in this line gives us information about how much data has previously been written to the media. That value multiplied by 16 yields the number of sectors used.

    If the seek line cannot be found in the output, then sectors used of zero is assumed.

    Returns:
    Sectors used on the media, as a floating point number.
    Raises:
    • ValueError - If the output cannot be parsed properly.
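The seek-times-16 rule is easy to sketch; the regular expression is an assumption based on the sample line above, not the pattern the real method uses.

```python
import re

SEEK_PATTERN = re.compile(r"seek=(\d+)")

def parse_sectors_used(output):
    """The seek value on the 'Executing ...' line, times 16, yields sectors used;
    zero is assumed if no seek value appears in the output."""
    for line in output:
        match = SEEK_PATTERN.search(line)
        if match:
            return float(match.group(1)) * 16.0
    return 0.0
```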

    Property Details

    device

    Filesystem device name for this writer.

    Get Method:
    _getDevice(self) - Property target used to get the device value.

    scsiId

    SCSI id for the device (saved for reference only).

    Get Method:
    _getScsiId(self) - Property target used to get the SCSI id value.

    hardwareId

    Hardware id for this writer (always the device path).

    Get Method:
    _getHardwareId(self) - Property target used to get the hardware id value.

    driveSpeed

    Speed at which the drive writes.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.

    media

    Definition of media that is expected to be in the device.

    Get Method:
    _getMedia(self) - Property target used to get the media description.

    deviceHasTray

    Indicates whether the device has a media tray.

    Get Method:
    _getDeviceHasTray(self) - Property target used to get the device-has-tray flag.

    deviceCanEject

    Indicates whether the device supports ejecting its media.

    Get Method:
    _getDeviceCanEject(self) - Property target used to get the device-can-eject flag.

    refreshMediaDelay

    Refresh media delay, in seconds.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the configured refresh media delay, in seconds.

    ejectDelay

    Eject delay, in seconds.

    Get Method:
    _getEjectDelay(self) - Property target used to get the configured eject delay, in seconds.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.cli._ActionSet-class.html
    Package CedarBackup2 :: Module cli :: Class _ActionSet

    Class _ActionSet

    source code

    object --+
             |
            _ActionSet
    

    Class representing a set of local actions to be executed.

    This class does four different things. First, it ensures that the actions specified on the command-line are sensible. The command-line can only list either built-in actions or extended actions specified in configuration. Also, certain actions (in NONCOMBINE_ACTIONS) cannot be combined with other actions.

    Second, the class enforces an execution order on the specified actions. Any time actions are combined on the command line (either built-in actions or extended actions), we must make sure they get executed in a sensible order.

    Third, the class ensures that any pre-action or post-action hooks are scheduled and executed appropriately. Hooks are configured by building a dictionary mapping between hook action name and command. Pre-action hooks are executed immediately before their associated action, and post-action hooks are executed immediately after their associated action.

    Finally, the class properly interleaves local and managed actions so that the same action gets executed first locally and then on managed peers.

    Instance Methods
     
    __init__(self, actions, extensions, options, peers, managed, local)
    Constructor for the _ActionSet class.
    source code
     
    executeActions(self, configPath, options, config)
    Executes all actions and extended actions, in the proper order.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _deriveExtensionNames(extensions)
    Builds a list of extended actions that are available in configuration.
    source code
     
    _buildHookMaps(hooks)
    Build two mappings from action name to configured ActionHook.
    source code
     
    _buildFunctionMap(extensions)
    Builds a mapping from named action to action function.
    source code
     
    _buildIndexMap(extensions)
    Builds a mapping from action name to proper execution index.
    source code
     
    _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap)
    Builds a mapping from action name to list of action items.
    source code
     
    _buildPeerMap(options, peers)
    Build a mapping from action name to list of remote peers.
    source code
     
    _deriveHooks(action, preHookDict, postHookDict)
    Derive pre- and post-action hooks, if any, associated with named action.
    source code
     
    _validateActions(actions, extensionNames)
    Validate that the set of specified actions is sensible.
    source code
     
    _buildActionSet(actions, actionMap)
    Build set of actions to be executed.
    source code
     
    _getRemoteUser(options, remotePeer)
    Gets the remote user associated with a remote peer.
    source code
     
    _getRshCommand(options, remotePeer)
    Gets the RSH command associated with a remote peer.
    source code
     
    _getCbackCommand(options, remotePeer)
    Gets the cback command associated with a remote peer.
    source code
     
    _getManagedActions(options, remotePeer)
    Gets the managed actions list associated with a remote peer.
    source code
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, actions, extensions, options, peers, managed, local)
    (Constructor)

    source code 

    Constructor for the _ActionSet class.

    This is kind of ugly, because the constructor has to set up a lot of data before being able to do anything useful. The following data structures are initialized based on the input:

    • extensionNames: List of extensions available in configuration
    • preHookMap: Mapping from action name to pre ActionHook
    • postHookMap: Mapping from action name to post ActionHook
    • functionMap: Mapping from action name to Python function
    • indexMap: Mapping from action name to execution index
    • peerMap: Mapping from action name to set of RemotePeer
    • actionMap: Mapping from action name to _ActionItem

    Once these data structures are set up, the command line is validated to make sure only valid actions have been requested, and in a sensible combination. Then, all of the data is used to build self.actionSet, the set of action items to be executed by executeActions(). This list might contain either _ActionItem or _ManagedActionItem.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • extensions - Extended action configuration (i.e. config.extensions)
    • options - Options configuration (i.e. config.options)
    • peers - Peers configuration (i.e. config.peers)
    • managed - Whether to include managed actions in the set
    • local - Whether to include local actions in the set
    Raises:
    • ValueError - If one of the specified actions is invalid.
    Overrides: object.__init__

    executeActions(self, configPath, options, config)

    source code 

    Executes all actions and extended actions, in the proper order.

    Each action (whether built-in or extension) is executed in an identical manner. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action functions.
    • config - Parsed configuration to be passed to action functions.
    Raises:
    • Exception - If there is a problem executing the actions.

    _deriveExtensionNames(extensions)
    Static Method

    source code 

    Builds a list of extended actions that are available in configuration.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    List of extended action names.

    _buildHookMaps(hooks)
    Static Method

    source code 

    Build two mappings from action name to configured ActionHook.

    Parameters:
    • hooks - List of pre- and post-action hooks (i.e. config.options.hooks)
    Returns:
    Tuple of (pre hook dictionary, post hook dictionary).

    _buildFunctionMap(extensions)
    Static Method

    source code 

    Builds a mapping from named action to action function.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    Dictionary mapping action to function.

    _buildIndexMap(extensions)
    Static Method

    source code 

    Builds a mapping from action name to proper execution index.

    If extensions configuration is None, or there are no configured extended actions, the ordering dictionary will only include the built-in actions and their standard indices.

    Otherwise, if the extensions order mode is None or "index", actions will be scheduled by explicit index; if the order mode is "dependency", actions will be scheduled using a dependency graph.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    Dictionary mapping action name to integer execution index.
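    A minimal sketch of the "dependency" order mode described above, using a topological sort to assign integer indices. The action names and dependency structure here are hypothetical, and buildIndexMap is an illustrative stand-in for the private method, not its actual code.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def buildIndexMap(dependencies):
    """Map each action name to an execution index honoring dependencies.

    `dependencies` maps an action to the set of actions it depends on.
    """
    order = list(TopologicalSorter(dependencies).static_order())
    return {name: index for index, name in enumerate(order)}

# Hypothetical dependency graph: stage depends on collect, store on stage.
indexMap = buildIndexMap({
    "stage": {"collect"},
    "store": {"stage"},
})
```
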

    _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap)
    Static Method

    source code 

    Builds a mapping from action name to list of action items.

    We build either _ActionItem or _ManagedActionItem objects here.

    In most cases, the mapping from action name to _ActionItem is 1:1. The exception is the "all" action, which is a special case. However, a list is returned in all cases, just for consistency later. Each _ActionItem will be created with a proper function reference and index value for execution ordering.

    The mapping from action name to _ManagedActionItem is always 1:1. Each managed action item contains a list of peers on which the action should be executed.

    Parameters:
    • managed - Whether to include managed actions in the set
    • local - Whether to include local actions in the set
    • extensionNames - List of valid extended action names
    • functionMap - Dictionary mapping action name to Python function
    • indexMap - Dictionary mapping action name to integer execution index
    • preHookMap - Dictionary mapping action name to pre hooks (if any) for the action
    • postHookMap - Dictionary mapping action name to post hooks (if any) for the action
    • peerMap - Dictionary mapping action name to list of remote peers on which to execute the action
    Returns:
    Dictionary mapping action name to list of _ActionItem objects.

    _buildPeerMap(options, peers)
    Static Method

    source code 

    Build a mapping from action name to list of remote peers.

    There will be one entry in the mapping for each managed action. If there are no managed peers, the mapping will be empty. Only managed actions will be listed in the mapping.

    Parameters:
    • options - Option configuration (i.e. config.options)
    • peers - Peers configuration (i.e. config.peers)

    _deriveHooks(action, preHookDict, postHookDict)
    Static Method

    source code 

    Derive pre- and post-action hooks, if any, associated with named action.

    Parameters:
    • action - Name of action to look up
    • preHookDict - Dictionary mapping action name to pre-action hook
    • postHookDict - Dictionary mapping action name to post-action hook
    Returns:
    Tuple (preHook, postHook) per mapping, with None values if there is no hook.

    _validateActions(actions, extensionNames)
    Static Method

    source code 

    Validate that the set of specified actions is sensible.

    Any specified action must either be a built-in action or must be among the extended actions defined in configuration. The actions from within NONCOMBINE_ACTIONS may not be combined with other actions.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • extensionNames - Names of extensions specified in configuration.
    Raises:
    • ValueError - If one or more configured actions are not valid.

    _buildActionSet(actions, actionMap)
    Static Method

    source code 

    Build set of actions to be executed.

    The set of actions is built in the proper order, so executeActions() can simply iterate through it. Since the set of actions has already been validated as sensible, no precautions are taken here to make sure things are combined properly. If an action is listed, it will be "scheduled" for execution.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • actionMap - Dictionary mapping action name to _ActionItem object.
    Returns:
    Set of action items in proper order.

    _getRemoteUser(options, remotePeer)
    Static Method

    source code 

    Gets the remote user associated with a remote peer. The peer's own value is used if set; otherwise the value is taken from the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Name of remote user associated with remote peer.
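    The fallback pattern shared by this and the following _get* helpers can be sketched as below. The attribute names (remoteUser, backupUser) are assumptions for illustration and are not guaranteed to match the library's exact fields.

```python
from types import SimpleNamespace

def getRemoteUser(options, remotePeer):
    """Prefer the peer's configured user, falling back to the options section."""
    if getattr(remotePeer, "remoteUser", None) is not None:
        return remotePeer.remoteUser
    return options.backupUser

# Usage: a peer without its own user inherits the options-level value.
opts = SimpleNamespace(backupUser="backup")
peerWithout = SimpleNamespace(remoteUser=None)
peerWith = SimpleNamespace(remoteUser="alice")
```
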

    _getRshCommand(options, remotePeer)
    Static Method

    source code 

    Gets the RSH command associated with a remote peer. The peer's own value is used if set; otherwise the value is taken from the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    RSH command associated with remote peer.

    _getCbackCommand(options, remotePeer)
    Static Method

    source code 

    Gets the cback command associated with a remote peer. The peer's own value is used if set; otherwise the value is taken from the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    cback command associated with remote peer.

    _getManagedActions(options, remotePeer)
    Static Method

    source code 

    Gets the managed actions list associated with a remote peer. The peer's own value is used if set; otherwise the value is taken from the options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Set of managed actions associated with remote peer.

    Package CedarBackup2 :: Package extend :: Module sysinfo

    Module sysinfo

    source code

    Provides an extension to save off important system recovery information.

    This is a simple Cedar Backup extension used to save off important system recovery information. It saves off three types of information:

    • Currently-installed Debian packages via dpkg --get-selections
    • Disk partition information via fdisk -l
    • System-wide mounted filesystem contents, via ls -laR

    The saved-off information is placed into the collect directory and is compressed using bzip2 to save space.

    This extension relies on the options and collect configurations in the standard Cedar Backup configuration file, but requires no new configuration of its own. No public functions other than the action are exposed since all of this is pretty simple.


    Note: If the dpkg or fdisk commands cannot be found in their normal locations or executed by the current user, those steps will be skipped and a note will be logged at the INFO level.
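    The availability check implied by the note can be sketched as follows, using the conventional paths listed under Variables on this page; this is an illustrative sketch, not the module's actual code.

```python
import os

# Same conventional paths the module documents.
DPKG_PATH = '/usr/bin/dpkg'
FDISK_PATH = '/sbin/fdisk'

def commandAvailable(path):
    """True when the command exists and is executable by the current user."""
    return os.path.isfile(path) and os.access(path, os.X_OK)

# A step would be skipped (with an INFO-level log message) when this
# returns False for its command.
dpkgOk = commandAvailable(DPKG_PATH)
```
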

    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeAction(configPath, options, config)
    Executes the sysinfo backup action.
    source code
     
    _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True)
    Dumps a list of currently installed Debian packages via dpkg.
    source code
     
    _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True)
    Dumps information about the partition table via fdisk.
    source code
     
    _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True)
    Dumps complete listing of filesystem contents via ls -laR.
    source code
     
    _getOutputFile(targetDir, name, compress=True)
    Opens the output file used for saving a dump to the filesystem.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.extend.sysinfo")
      DPKG_PATH = '/usr/bin/dpkg'
      FDISK_PATH = '/sbin/fdisk'
      DPKG_COMMAND = ['/usr/bin/dpkg', '--get-selections']
      FDISK_COMMAND = ['/sbin/fdisk', '-l']
      LS_COMMAND = ['ls', '-laR', '/']
      __package__ = 'CedarBackup2.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the sysinfo backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If the backup process fails for some reason.

    _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps a list of currently installed Debian packages via dpkg.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps information about the partition table via fdisk.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps complete listing of filesystem contents via ls -laR.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _getOutputFile(targetDir, name, compress=True)

    source code 

    Opens the output file used for saving a dump to the filesystem.

    The filename will be name.txt (or name.txt.bz2 if compress is True), written in the target directory.

    Parameters:
    • targetDir - Target directory to write file in.
    • name - Name of the file to create.
    • compress - Indicates whether to write compressed output.
    Returns:
    Tuple of (Output file object, filename)
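    The naming and compression behavior described here can be sketched like this; the signature matches the documented one, but the body is an illustrative reimplementation rather than the module's actual code.

```python
import bz2
import os
import tempfile

def getOutputFile(targetDir, name, compress=True):
    """Open name.txt (or name.txt.bz2 when compressing) in targetDir."""
    suffix = ".txt.bz2" if compress else ".txt"
    filename = os.path.join(targetDir, name + suffix)
    outfile = bz2.open(filename, "wt") if compress else open(filename, "w")
    return outfile, filename

# Usage: a dump written through the returned file object lands in a
# bzip2-compressed file.
target = tempfile.mkdtemp()
dump, path = getOutputFile(target, "fdisk")
dump.write("partition table dump\n")
dump.close()
```
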

    Package CedarBackup2 :: Package tools

    Package tools

    source code

    Official Cedar Backup Tools

    This package provides official Cedar Backup tools. Tools are things that feel a little like extensions, but don't fit the normal mold of extensions. For instance, they might not be intended to run from cron, or might need to interact dynamically with the user (i.e. accept user input).

    Tools are usually scripts that are run directly from the command line, just like the main cback script. Like the cback script, the majority of a tool is implemented in a .py module, and then the script just invokes the module's cli() function. The actual scripts for tools are distributed in the util/ directory.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules

    Variables
      __package__ = None
    Package CedarBackup2 :: Module filesystem

    Module filesystem

    source code

    Provides filesystem-related objects.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      FilesystemList
    Represents a list of filesystem items.
      BackupFileList
    List of files to be backed up.
      PurgeItemList
    List of files and directories to be purged.
      SpanItem
    Item returned by BackupFileList.generateSpan.
    Functions
     
    normalizeDir(path)
    Normalizes a directory name.
    source code
     
    compareContents(path1, path2, verbose=False)
    Compares the contents of two directories to see if they are equivalent.
    source code
     
    compareDigestMaps(digest1, digest2, verbose=False)
    Compares two digest maps and throws an exception if they differ.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.filesystem")
      __package__ = 'CedarBackup2'
    Function Details

    normalizeDir(path)

    source code 

    Normalizes a directory name.

    For our purposes, a directory name is normalized by removing the trailing path separator, if any. This is important because we want directories to appear within lists in a consistent way, even though from the user's perspective /path/to/dir/ and /path/to/dir are equivalent.

    Parameters:
    • path (String representing a path on disk) - Path to be normalized.
    Returns:
    Normalized path, which should be equivalent to the original.
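    The normalization rule described above (strip a trailing separator, leave the root alone) can be sketched in a few lines; the real implementation may differ in details.

```python
import os

def normalizeDir(path):
    """Strip a single trailing path separator, leaving the root untouched."""
    if path != os.sep and path.endswith(os.sep):
        return path[:-len(os.sep)]
    return path
```
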

    compareContents(path1, path2, verbose=False)

    source code 

    Compares the contents of two directories to see if they are equivalent.

    The two directories are recursively compared. First, we check whether they contain exactly the same set of files. Then, we check that every file has exactly the same contents in both directories.

    This is all relatively simple to implement through the magic of BackupFileList.generateDigestMap, which knows how to strip a path prefix off the front of each entry in the mapping it generates. This makes our comparison as simple as creating a list for each path, then generating a digest map for each path and comparing the two.

    If no exception is thrown, the two directories are considered identical.

    If the verbose flag is True, then an alternate (but slower) method is used so that any thrown exception can indicate exactly which file caused the comparison to fail. The thrown ValueError exception distinguishes between the directories containing different files, and containing the same files with differing content.

    Parameters:
    • path1 (String representing a path on disk) - First path to compare.
    • path2 (String representing a path on disk) - Second path to compare.
    • verbose (Boolean) - Indicates whether a verbose response should be given.
    Raises:
    • ValueError - If a directory doesn't exist or can't be read.
    • ValueError - If the two directories are not equivalent.
    • IOError - If there is an unusual problem reading the directories.

    Note: Symlinks are not followed for the purposes of this comparison.
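    The digest-map approach described above can be sketched in a few lines. Here generateDigestMap is a simplified stand-in for BackupFileList.generateDigestMap (which also knows how to strip a path prefix), and SHA-1 is assumed as the digest algorithm for illustration.

```python
import hashlib
import os

def generateDigestMap(root):
    """Map each file's path (relative to root) to a digest of its contents."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)  # strip the path prefix
            with open(full, "rb") as handle:
                digests[rel] = hashlib.sha1(handle.read()).hexdigest()
    return digests

def compareDigestMaps(digest1, digest2):
    """Raise ValueError when the two mappings differ."""
    if digest1 != digest2:
        raise ValueError("Directories are not equivalent.")
```

With the prefixes stripped, comparing two directories reduces to comparing two dictionaries.
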

    compareDigestMaps(digest1, digest2, verbose=False)

    source code 

    Compares two digest maps and throws an exception if they differ.

    Parameters:
    • digest1 (Digest as returned from BackupFileList.generateDigestMap()) - First digest to compare.
    • digest2 (Digest as returned from BackupFileList.generateDigestMap()) - Second digest to compare.
    • verbose (Boolean) - Indicates whether a verbose response should be given.
    Raises:
    • ValueError - If the two digest maps are not equivalent.

    Package CedarBackup2 :: Module config :: Class LocalPeer

    Class LocalPeer

    source code

    object --+
             |
            LocalPeer
    

    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

    • The peer name must be a non-empty string.
    • The collect directory must be an absolute path.
    • The ignore failure mode must be one of the values in VALID_FAILURE_MODES.
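    The restrictions above can be illustrated with a standalone sketch. LocalPeerSketch is not the real class (which lives in CedarBackup2.config), and the VALID_FAILURE_MODES values shown are assumptions for illustration.

```python
import os

VALID_FAILURE_MODES = ["none", "all", "daily", "weekly"]  # assumed values

class LocalPeerSketch:
    """Standalone sketch of the LocalPeer validation rules."""

    def __init__(self, name=None, collectDir=None, ignoreFailureMode=None):
        if name is not None and name == "":
            raise ValueError("Peer name must be a non-empty string.")
        if collectDir is not None and not os.path.isabs(collectDir):
            raise ValueError("Collect directory must be an absolute path.")
        if ignoreFailureMode is not None and ignoreFailureMode not in VALID_FAILURE_MODES:
            raise ValueError("Ignore failure mode is not valid.")
        self.name = name
        self.collectDir = collectDir
        self.ignoreFailureMode = ignoreFailureMode
```
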
    Instance Methods
     
    __init__(self, name=None, collectDir=None, ignoreFailureMode=None)
    Constructor for the LocalPeer class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      name
    Name of the peer, typically a valid hostname.
      collectDir
    Collect directory to stage files from on peer.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, collectDir=None, ignoreFailureMode=None)
    (Constructor)

    source code 

    Constructor for the LocalPeer class.

    Parameters:
    • name - Name of the peer, typically a valid hostname.
    • collectDir - Collect directory to stage files from on peer.
    • ignoreFailureMode - Ignore failure mode for peer.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    name

    Name of the peer, typically a valid hostname.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Collect directory to stage files from on peer.

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.


    Module image


    Variables

    __package__

    Package CedarBackup2 :: Module config :: Class CollectDir

    Class CollectDir

    source code

    object --+
             |
            CollectDir
    

    Class representing a Cedar Backup collect directory.

    The following restrictions exist on data in this class:

    • Absolute paths must be absolute
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The archive mode must be one of the values in VALID_ARCHIVE_MODES.
    • The ignore file must be a non-empty string.

    For the absoluteExcludePaths list, validation is accomplished through the util.AbsolutePathList list implementation that overrides common list methods and transparently does the absolute path validation for us.


    Note: Lists within this class are "unordered" for equality comparisons.
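    The transparent validation mentioned for util.AbsolutePathList can be sketched as a small list subclass; this is illustrative, not the library's implementation.

```python
import os

class AbsolutePathListSketch(list):
    """List that rejects non-absolute entries on append/extend."""

    def append(self, item):
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: %s" % item)
        super().append(item)

    def extend(self, items):
        for item in items:
            self.append(item)  # each element is validated on the way in
```
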

    Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None)
    Constructor for the CollectDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setArchiveMode(self, value)
    Property target used to set the archive mode.
    source code
     
    _getArchiveMode(self)
    Property target used to get the archive mode.
    source code
     
    _setIgnoreFile(self, value)
    Property target used to set the ignore file.
    source code
     
    _getIgnoreFile(self)
    Property target used to get the ignore file.
    source code
     
    _setLinkDepth(self, value)
    Property target used to set the link depth.
    source code
     
    _getLinkDepth(self)
    Property target used to get the action linkDepth.
    source code
     
    _setDereference(self, value)
    Property target used to set the dereference flag.
    source code
     
    _getDereference(self)
    Property target used to get the dereference flag.
    source code
     
    _setRecursionLevel(self, value)
    Property target used to set the recursionLevel.
    source code
     
    _getRecursionLevel(self)
    Property target used to get the action recursionLevel.
    source code
     
    _setAbsoluteExcludePaths(self, value)
    Property target used to set the absolute exclude paths list.
    source code
     
    _getAbsoluteExcludePaths(self)
    Property target used to get the absolute exclude paths list.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      absolutePath
    Absolute path of the directory to collect.
      collectMode
    Overridden collect mode for this directory.
      archiveMode
    Overridden archive mode for this directory.
      ignoreFile
    Overridden ignore file name for this directory.
      linkDepth
    Maximum depth at which soft links should be followed.
      dereference
    Whether to dereference links that are followed.
      absoluteExcludePaths
    List of absolute paths to exclude.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.
      recursionLevel
    Recursion level to use for recursive directory collection

    Inherited from object: __class__

    Method Details

    __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None)
    (Constructor)

    source code 

    Constructor for the CollectDir class.

    Parameters:
    • absolutePath - Absolute path of the directory to collect.
    • collectMode - Overridden collect mode for this directory.
    • archiveMode - Overridden archive mode for this directory.
    • ignoreFile - Overridden ignore file name for this directory.
    • linkDepth - Maximum depth at which soft links should be followed.
    • dereference - Whether to dereference links that are followed.
    • absoluteExcludePaths - List of absolute paths to exclude.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of the values in VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setLinkDepth(self, value)

    source code 

    Property target used to set the link depth. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setDereference(self, value)

    source code 

    Property target used to set the dereference flag. No validations, but we normalize the value to True or False.

    _setRecursionLevel(self, value)

    source code 

    Property target used to set the recursionLevel. The value must be an integer.

    Raises:
    • ValueError - If the value is not valid.

    _setAbsoluteExcludePaths(self, value)

    source code 

    Property target used to set the absolute exclude paths list. Either the value must be None or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


    Property Details

    absolutePath

    Absolute path of the directory to collect.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this directory.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Overridden archive mode for this directory.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

    ignoreFile

    Overridden ignore file name for this directory.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

    linkDepth

    Maximum depth at which soft links should be followed.

    Get Method:
    _getLinkDepth(self) - Property target used to get the action linkDepth.
    Set Method:
    _setLinkDepth(self, value) - Property target used to set the link depth.

    dereference

    Whether to dereference links that are followed.

    Get Method:
    _getDereference(self) - Property target used to get the dereference flag.
    Set Method:
    _setDereference(self, value) - Property target used to set the dereference flag.

    absoluteExcludePaths

    List of absolute paths to exclude.

    Get Method:
    _getAbsoluteExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setAbsoluteExcludePaths(self, value) - Property target used to set the absolute exclude paths list.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    recursionLevel

Recursion level to use for recursive directory collection.

    Get Method:
    _getRecursionLevel(self) - Property target used to get the action recursionLevel.
    Set Method:
    _setRecursionLevel(self, value) - Property target used to set the recursionLevel.

    CedarBackup2.util._Vertex
    Package CedarBackup2 :: Module util :: Class _Vertex

    Class _Vertex

    source code

    object --+
             |
            _Vertex
    

    Represents a vertex (or node) in a directed graph.

    Instance Methods
     
    __init__(self, name)
    Constructor.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, name)
    (Constructor)

    source code 

    Constructor.

    Parameters:
    • name (String value.) - Name of this graph vertex.
    Overrides: object.__init__

    knapsack

    Module knapsack


    Functions

    alternateFit
    bestFit
    firstFit
    worstFit

    Variables

    __package__

    CedarBackup2.util
    Package CedarBackup2 :: Module util

    Module util

    source code

    Provides general-purpose utilities.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      AbsolutePathList
    Class representing a list of absolute paths.
      ObjectTypeList
    Class representing a list containing only objects with a certain type.
      RestrictedContentList
    Class representing a list containing only objects with certain values.
      RegexMatchList
    Class representing a list containing only strings that match a regular expression.
      RegexList
    Class representing a list of valid regular expression strings.
      _Vertex
    Represents a vertex (or node) in a directed graph.
      DirectedGraph
    Represents a directed graph.
      PathResolverSingleton
    Singleton used for resolving executable paths.
      UnorderedList
    Class representing an "unordered list".
      Pipe
    Specialized pipe class for use by executeCommand.
      Diagnostics
    Class holding runtime diagnostic information.
    Functions
     
    sortDict(d)
    Returns the keys of the dictionary sorted by value.
    source code
     
    convertSize(size, fromUnit, toUnit)
    Converts a size in one unit to a size in another unit.
    source code
     
    getUidGid(user, group)
    Get the uid/gid associated with a user/group pair
    source code
     
    changeOwnership(path, user, group)
    Changes ownership of path to match the user and group.
    source code
     
    splitCommandLine(commandLine)
    Splits a command line string into a list of arguments.
    source code
     
    resolveCommand(command)
    Resolves the real path to a command through the path resolver mechanism.
    source code
     
    executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None)
    Executes a shell command, hopefully in a safe way.
    source code
     
    calculateFileAge(path)
    Calculates the age (in days) of a file.
    source code
     
    encodePath(path)
    Safely encodes a filesystem path.
    source code
     
    nullDevice()
    Attempts to portably return the null device on this system.
    source code
     
    deriveDayOfWeek(dayName)
    Converts English day name to numeric day of week as from time.localtime.
    source code
     
    isStartOfWeek(startingDay)
    Indicates whether "today" is the backup starting day per configuration.
    source code
     
    buildNormalizedPath(path)
    Returns a "normalized" path based on a path name.
    source code
     
    removeKeys(d, keys)
    Removes all of the keys from the dictionary.
    source code
     
    displayBytes(bytes, digits=2)
    Format a byte quantity so it can be sensibly displayed.
    source code
     
    getFunctionReference(module, function)
    Gets a reference to a named function.
    source code
     
    isRunningAsRoot()
    Indicates whether the program is running as the root user.
    source code
     
    mount(devicePath, mountPoint, fsType)
    Mounts the indicated device at the indicated mount point.
    source code
     
    unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0)
    Unmounts whatever device is mounted at the indicated mount point.
    source code
     
    deviceMounted(devicePath)
    Indicates whether a specific filesystem device is currently mounted.
    source code
     
    sanitizeEnvironment()
    Sanitizes the operating system environment.
    source code
     
    dereferenceLink(path, absolute=True)
    Dereference a soft link, optionally normalizing it to an absolute path.
    source code
     
    checkUnique(prefix, values)
    Checks that all values are unique.
    source code
     
    parseCommaSeparatedString(commaString)
    Parses a list of values out of a comma-separated string.
    source code
    Variables
      ISO_SECTOR_SIZE = 2048.0
    Size of an ISO image sector, in bytes.
      BYTES_PER_SECTOR = 2048.0
    Number of bytes (B) per ISO sector.
      BYTES_PER_KBYTE = 1024.0
    Number of bytes (B) per kilobyte (kB).
      BYTES_PER_MBYTE = 1048576.0
    Number of bytes (B) per megabyte (MB).
      BYTES_PER_GBYTE = 1073741824.0
    Number of bytes (B) per gigabyte (GB).
      KBYTES_PER_MBYTE = 1024.0
    Number of kilobytes (kB) per megabyte (MB).
      MBYTES_PER_GBYTE = 1024.0
    Number of megabytes (MB) per gigabyte (GB).
      SECONDS_PER_MINUTE = 60.0
    Number of seconds per minute.
      MINUTES_PER_HOUR = 60.0
    Number of minutes per hour.
      HOURS_PER_DAY = 24.0
    Number of hours per day.
      SECONDS_PER_DAY = 86400.0
    Number of seconds per day.
      UNIT_BYTES = 0
    Constant representing the byte (B) unit for conversion.
      UNIT_KBYTES = 1
    Constant representing the kilobyte (kB) unit for conversion.
      UNIT_MBYTES = 2
    Constant representing the megabyte (MB) unit for conversion.
      UNIT_GBYTES = 4
    Constant representing the gigabyte (GB) unit for conversion.
      UNIT_SECTORS = 3
    Constant representing the ISO sector unit for conversion.
      _UID_GID_AVAILABLE = True
      logger = logging.getLogger("CedarBackup2.log.util")
      outputLogger = logging.getLogger("CedarBackup2.output")
      MTAB_FILE = '/etc/mtab'
      MOUNT_COMMAND = ['mount']
      UMOUNT_COMMAND = ['umount']
      DEFAULT_LANGUAGE = 'C'
      LANG_VAR = 'LANG'
      LOCALE_VARS = ['LC_ADDRESS', 'LC_ALL', 'LC_COLLATE', 'LC_CTYPE...
      __package__ = 'CedarBackup2'
    Function Details

    sortDict(d)

    source code 

    Returns the keys of the dictionary sorted by value.

    There are cuter ways to do this in Python 2.4, but we were originally attempting to stay compatible with Python 2.3.

    Parameters:
    • d - Dictionary to operate on
    Returns:
    List of dictionary keys sorted in order by dictionary value.
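
    In current Python the documented contract amounts to one sorted() call with a key function. The sketch below is only an illustration of that contract, not the library's original implementation:

    ```python
    def sortDict(d):
        """Return the keys of d, sorted in ascending order of their values."""
        return sorted(d, key=lambda k: d[k])
    ```

    For example, sortDict({"a": 3, "b": 1, "c": 2}) yields ["b", "c", "a"].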

    convertSize(size, fromUnit, toUnit)

    source code 

    Converts a size in one unit to a size in another unit.

    This is just a convenience function so that the functionality can be implemented in just one place. Internally, we convert values to bytes and then to the final unit.

    The available units are:

    • UNIT_BYTES - Bytes
    • UNIT_KBYTES - Kilobytes, where 1 kB = 1024 B
    • UNIT_MBYTES - Megabytes, where 1 MB = 1024 kB
    • UNIT_GBYTES - Gigabytes, where 1 GB = 1024 MB
    • UNIT_SECTORS - Sectors, where 1 sector = 2048 B
    Parameters:
    • size (Integer or float value in units of fromUnit) - Size to convert
    • fromUnit (One of the units listed above) - Unit to convert from
    • toUnit (One of the units listed above) - Unit to convert to
    Returns:
    Number converted to new unit, as a float.
    Raises:
    • ValueError - If one of the units is invalid.
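
    The convert-through-bytes approach can be sketched as follows, using the documented UNIT_* constant values; this is an illustration of the described algorithm, not the library's actual code:

    ```python
    # Unit constants, using the values documented for CedarBackup2.util.
    UNIT_BYTES = 0
    UNIT_KBYTES = 1
    UNIT_MBYTES = 2
    UNIT_SECTORS = 3
    UNIT_GBYTES = 4

    _BYTES_PER_UNIT = {
        UNIT_BYTES: 1.0,
        UNIT_KBYTES: 1024.0,           # 1 kB = 1024 B
        UNIT_MBYTES: 1024.0 * 1024.0,  # 1 MB = 1024 kB
        UNIT_GBYTES: 1024.0 ** 3,      # 1 GB = 1024 MB
        UNIT_SECTORS: 2048.0,          # 1 ISO sector = 2048 B
    }

    def convertSize(size, fromUnit, toUnit):
        """Convert size from one unit to another by going through bytes."""
        if fromUnit not in _BYTES_PER_UNIT or toUnit not in _BYTES_PER_UNIT:
            raise ValueError("Unknown unit.")
        return float(size) * _BYTES_PER_UNIT[fromUnit] / _BYTES_PER_UNIT[toUnit]
    ```

    For example, convertSize(1, UNIT_MBYTES, UNIT_KBYTES) returns 1024.0.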

    getUidGid(user, group)

    source code 

    Get the uid/gid associated with a user/group pair

    This is a no-op if user/group functionality is not available on the platform.

    Parameters:
    • user (User name as a string) - User name
    • group (Group name as a string) - Group name
    Returns:
    Tuple (uid, gid) matching passed-in user and group.
    Raises:
    • ValueError - If the ownership user/group values are invalid

    changeOwnership(path, user, group)

    source code 

    Changes ownership of path to match the user and group.

    This is a no-op if user/group functionality is not available on the platform, or if either the passed-in user or group is None. Further, we won't even try to do it unless running as root, since it's unlikely to work.

    Parameters:
    • path - Path whose ownership to change.
    • user - User which owns file.
    • group - Group which owns file.

    splitCommandLine(commandLine)

    source code 

    Splits a command line string into a list of arguments.

    Unfortunately, there is no "standard" way to parse a command line string, and it's actually not an easy problem to solve portably (essentially, we have to emulate the shell argument-processing logic). This code only respects double quotes (") for grouping arguments, not single quotes ('). Make sure you take this into account when building your command line.

    Incidentally, I found this particular parsing method while digging around in Google Groups, and I tweaked it for my own use.

    Parameters:
    • commandLine (String, i.e. "cback --verbose stage store") - Command line string
    Returns:
    List of arguments, suitable for passing to popen2.
    Raises:
    • ValueError - If the command line is None.
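
    The double-quotes-only behavior can be approximated with a short regular expression. This is a hedged sketch of the documented semantics, not the original Google-Groups-derived parsing code:

    ```python
    import re

    def splitCommandLine(commandLine):
        """Split a command line into arguments, honoring only double
        quotes (") for grouping, per the documented behavior."""
        if commandLine is None:
            raise ValueError("Command line must not be None.")
        fields = re.findall(r'"[^"]*"|\S+', commandLine)
        # Strip the surrounding double quotes from quoted fields.
        return [f[1:-1] if len(f) >= 2 and f[0] == '"' and f[-1] == '"' else f
                for f in fields]
    ```

    For example, splitCommandLine('scp -B "a file.txt" dest') yields ['scp', '-B', 'a file.txt', 'dest'].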

    resolveCommand(command)

    source code 

    Resolves the real path to a command through the path resolver mechanism.

    Both extensions and standard Cedar Backup functionality need a way to resolve the "real" location of various executables. Normally, they assume that these executables are on the system path, but some callers need to specify an alternate location.

    Ideally, we want to handle this configuration in a central location. The Cedar Backup path resolver mechanism (a singleton called PathResolverSingleton) provides the central location to store the mappings. This function wraps access to the singleton, and is what all functions (extensions or standard functionality) should call if they need to find a command.

    The passed-in command must actually be a list, in the standard form used by all existing Cedar Backup code (something like ["svnlook", ]). The lookup will actually be done on the first element in the list, and the returned command will always be in list form as well.

    If the passed-in command can't be resolved or no mapping exists, then the command itself will be returned unchanged. This way, we neatly fall back on default behavior if we have no sensible alternative.

    Parameters:
    • command (List form of command, i.e. ["svnlook", ].) - Command to resolve.
    Returns:
    Path to command or just command itself if no mapping exists.

    executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None)

    source code 

    Executes a shell command, hopefully in a safe way.

    This function exists to replace direct calls to os.popen in the Cedar Backup code. It's not safe to call a function such as os.popen() with untrusted arguments, since that can cause problems if the string contains non-safe variables or other constructs (imagine that the argument is $WHATEVER, but $WHATEVER contains something like "; rm -fR ~/; echo" in the current environment).

    Instead, it's safer to pass a list of arguments in the style supported by popen2 or popen4. This function actually uses a specialized Pipe class implemented using either subprocess.Popen or popen2.Popen4.

    Under the normal case, this function will return a tuple of (status, None) where the status is the wait-encoded return status of the call per the popen2.Popen4 documentation. If returnOutput is passed in as True, the function will return a tuple of (status, output) where output is a list of strings, one entry per line in the output from the command. Output is always logged to the outputLogger.info() target, regardless of whether it's returned.

    By default, stdout and stderr will be intermingled in the output. However, if you pass in ignoreStderr=True, then only stdout will be included in the output.

    The doNotLog parameter exists so that callers can force the function to not log command output to the debug log. Normally, you would want to log. However, if you're using this function to write huge output files (i.e. database backups written to stdout) then you might want to avoid putting all that information into the debug log.

    The outputFile parameter exists to make it easier for a caller to push output into a file, i.e. as a substitute for redirection to a file. If this value is passed in, each time a line of output is generated, it will be written to the file using outputFile.write(). At the end, the file descriptor will be flushed using outputFile.flush(). The caller maintains responsibility for closing the file object appropriately.

    Parameters:
    • command (List of individual arguments that make up the command) - Shell command to execute
    • args (List of additional arguments to the command) - List of arguments to the command
    • returnOutput (Boolean True or False) - Indicates whether to return the output of the command
    • ignoreStderr (Boolean True or False) - Whether stderr should be discarded
    • doNotLog (Boolean True or False) - Indicates that output should not be logged.
    • outputFile (File object as returned from open() or file().) - File object that all output should be written to.
    Returns:
    Tuple of (result, output) as described above.
    Notes:
    • I know that it's a bit confusing that the command and the arguments are both lists. I could have just required the caller to pass in one big list. However, I think it makes some sense to keep the command (the constant part of what we're executing, i.e. "scp -B") separate from its arguments, even if they both end up looking kind of similar.
    • You cannot redirect output via shell constructs (i.e. >file, 2>/dev/null, etc.) using this function. The redirection string would be passed to the command just like any other argument. However, you can implement the equivalent to redirection using ignoreStderr and outputFile, as discussed above.
    • The operating system environment is partially sanitized before the command is invoked. See sanitizeEnvironment for details.
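
    A minimal subprocess-based approximation of the (status, output) contract looks like this. It is a sketch only: it returns the plain returncode rather than the wait-encoded status, always merges stderr into stdout, and omits the ignoreStderr, doNotLog, and outputFile behavior:

    ```python
    import subprocess
    import sys

    def executeCommand(command, args, returnOutput=False):
        """Run command plus args without a shell, returning a
        (status, output) tuple; output is a list of lines or None."""
        pipe = subprocess.Popen(command + args,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        output = []
        for line in pipe.stdout:      # one list entry per output line
            output.append(line)
        pipe.stdout.close()
        status = pipe.wait()
        return (status, output if returnOutput else None)
    ```

    As in the real function, the command list stays separate from its argument list even though both end up concatenated before execution.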

    calculateFileAge(path)

    source code 

    Calculates the age (in days) of a file.

    The "age" of a file is the amount of time since the file was last used, per the most recent of the file's st_atime and st_mtime values.

    Technically, we only intend this function to work with files, but it will probably work with anything on the filesystem.

    Parameters:
    • path - Path to a file on disk.
    Returns:
    Age of the file in days (possibly fractional).
    Raises:
    • OSError - If the file doesn't exist.
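
    The age computation described above can be sketched directly from os.stat; this is an illustrative reimplementation, not the library's code:

    ```python
    import os
    import time

    SECONDS_PER_DAY = 86400.0

    def calculateFileAge(path):
        """Age of a file in (possibly fractional) days, based on the more
        recent of st_atime and st_mtime.  os.stat raises OSError if the
        path does not exist."""
        info = os.stat(path)
        lastUse = max(info.st_atime, info.st_mtime)
        return (time.time() - lastUse) / SECONDS_PER_DAY
    ```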

    encodePath(path)

    source code 

    Safely encodes a filesystem path.

    Many Python filesystem functions, such as os.listdir, behave differently if they are passed unicode arguments versus simple string arguments. For instance, os.listdir generally returns unicode path names if it is passed a unicode argument, and string pathnames if it is passed a string argument.

    However, this behavior often isn't as consistent as we might like. As an example, os.listdir "gives up" if it finds a filename that it can't properly encode given the current locale settings. This means that the returned list is a mixed set of unicode and simple string paths. This has consequences later, because other filesystem functions like os.path.join will blow up if they are given one string path and one unicode path.

    On comp.lang.python, Martin v. Löwis explained the os.listdir behavior like this:

      The operating system (POSIX) does not have the inherent notion that file
      names are character strings. Instead, in POSIX, file names are primarily
      byte strings. There are some bytes which are interpreted as characters
      (e.g. '\x2e', which is '.', or '\x2f', which is '/'), but apart from
      that, most OS layers think these are just bytes.
    
      Now, most *people* think that file names are character strings.  To
      interpret a file name as a character string, you need to know what the
      encoding is to interpret the file names (which are byte strings) as
      character strings.
    
      There is, unfortunately, no operating system API to carry the notion of a
      file system encoding. By convention, the locale settings should be used
      to establish this encoding, in particular the LC_CTYPE facet of the
      locale. This is defined in the environment variables LC_CTYPE, LC_ALL,
      and LANG (searched in this order).
    
      If LANG is not set, the "C" locale is assumed, which uses ASCII as its
      file system encoding. In this locale, '\xe2\x99\xaa\xe2\x99\xac' is not a
      valid file name (at least it cannot be interpreted as characters, and
      hence not be converted to Unicode).
    
      Now, your Python script has requested that all file names *should* be
      returned as character (ie. Unicode) strings, but Python cannot comply,
      since there is no way to find out what this byte string means, in terms
      of characters.
    
      So we have three options:
    
      1. Skip this string, only return the ones that can be converted to Unicode. 
         Give the user the impression the file does not exist.
      2. Return the string as a byte string
      3. Refuse to listdir altogether, raising an exception (i.e. return nothing)
    
      Python has chosen alternative 2, allowing the application to implement 1
      or 3 on top of that if it wants to (or come up with other strategies,
      such as user feedback).
    

    As a solution, he suggests that rather than passing unicode paths into the filesystem functions, I should sensibly encode the path first. That is what this function accomplishes. Any function which takes a filesystem path as an argument should encode it first, before using it for any other purpose.

    I confess I still don't completely understand how this works. On a system with filesystem encoding "ISO-8859-1", a path u"\xe2\x99\xaa\xe2\x99\xac" is converted into the string "\xe2\x99\xaa\xe2\x99\xac". However, on a system with a "utf-8" encoding, the result is a completely different string: "\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac". A quick test where I write to the first filename and open the second proves that the two strings represent the same file on disk, which is all I really care about.

    Parameters:
    • path - Path to encode
    Returns:
    Path, as a string, encoded appropriately
    Raises:
    • ValueError - If the path cannot be encoded properly.
    Notes:
    • As a special case, if path is None, then this function will return None.
    • To provide several examples of encoding values, my Debian sarge box with an ext3 filesystem has Python filesystem encoding ISO-8859-1. User Anarcat's Debian box with a xfs filesystem has filesystem encoding ANSI_X3.4-1968. My iBook G4 running Mac OS X 10.4 and user Dag Rende's SuSE 9.3 box both have filesystem encoding UTF-8.
    • Just because a filesystem has UTF-8 encoding doesn't mean that it will be able to handle all extended-character filenames. For instance, certain extended-character (but not UTF-8) filenames -- like the ones in the regression test tar file test/data/tree13.tar.gz -- are not valid under Mac OS X, and it's not even possible to extract them from the tarfile on that platform.

    nullDevice()

    source code 

    Attempts to portably return the null device on this system.

    The null device is something like /dev/null on a UNIX system. The name varies on other platforms.

    deriveDayOfWeek(dayName)

    source code 

    Converts English day name to numeric day of week as from time.localtime.

    For instance, the day monday would be converted to the number 0.

    Parameters:
    • dayName (string, i.e. "monday", "tuesday", etc.) - Day of week to convert
    Returns:
    Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible.
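
    The documented mapping can be sketched with a simple lookup list (an illustration of the contract, not the original code):

    ```python
    _DAYS = ["monday", "tuesday", "wednesday", "thursday",
             "friday", "saturday", "sunday"]

    def deriveDayOfWeek(dayName):
        """Map an English day name onto time.localtime()'s numbering,
        where Monday is 0 and Sunday is 6; return -1 if unrecognized."""
        try:
            return _DAYS.index(dayName.lower())
        except ValueError:
            return -1
    ```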

    isStartOfWeek(startingDay)

    source code 

    Indicates whether "today" is the backup starting day per configuration.

    If the current day's English name matches the indicated starting day, then today is a starting day.

    Parameters:
    • startingDay (string, i.e. "monday", "tuesday", etc.) - Configured starting day.
    Returns:
    Boolean indicating whether today is the starting day.

    buildNormalizedPath(path)

    source code 

    Returns a "normalized" path based on a path name.

    A normalized path is a representation of a path that is also a valid file name. To make a valid file name out of a complete path, we have to convert or remove some characters that are significant to the filesystem -- in particular, the path separator and any leading '.' character (which would cause the file to be hidden in a file listing).

    Note that this is a one-way transformation -- you can't safely derive the original path from the normalized path.

    To normalize a path, we begin by looking at the first character. If the first character is '/' or '\', it gets removed. If the first character is '.', it gets converted to '_'. Then, we look through the rest of the path and convert all remaining '/' or '\' characters to '-', and all remaining whitespace characters to '_'.

    As a special case, a path consisting only of a single '/' or '\' character will be converted to '-'.

    Parameters:
    • path - Path to normalize
    Returns:
    Normalized path as described above.
    Raises:
    • ValueError - If the path is None
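
    The rules above translate directly into code; the sketch below follows the documented transformation step by step and is not the library's actual implementation:

    ```python
    def buildNormalizedPath(path):
        """One-way transformation of a path into a safe file name,
        following the documented rules."""
        if path is None:
            raise ValueError("Path must not be None.")
        if path in ("/", "\\"):            # special case: bare separator
            return "-"
        if path.startswith(("/", "\\")):   # drop a leading separator
            path = path[1:]
        elif path.startswith("."):         # avoid producing a hidden file
            path = "_" + path[1:]
        result = []
        for ch in path:
            if ch in "/\\":
                result.append("-")
            elif ch.isspace():
                result.append("_")
            else:
                result.append(ch)
        return "".join(result)
    ```

    For example, buildNormalizedPath("/usr/local/bin") yields "usr-local-bin".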

    removeKeys(d, keys)

    source code 

    Removes all of the keys from the dictionary. The dictionary is altered in-place. Each key must exist in the dictionary.

    Parameters:
    • d - Dictionary to operate on
    • keys - List of keys to remove
    Raises:
    • KeyError - If one of the keys does not exist

    displayBytes(bytes, digits=2)

    source code 

    Format a byte quantity so it can be sensibly displayed.

    It's rather difficult to look at a number like "72372224 bytes" and get any meaningful information out of it. It would be more useful to see something like "69.02 MB". That's what this function does. Any time you want to display a byte value, i.e.:

      print "Size: %s bytes" % bytes
    

    Call this function instead:

      print "Size: %s" % displayBytes(bytes)
    

    What comes out will be sensibly formatted. The indicated number of digits will be listed after the decimal point, rounded based on whatever rules are used by Python's standard %f string format specifier. (Values less than 1 kB will be listed in bytes and will not have a decimal point, since the concept of a fractional byte is nonsensical.)

    Parameters:
    • bytes (Integer number of bytes.) - Byte quantity.
    • digits (Integer value, typically 2-5.) - Number of digits to display after the decimal point.
    Returns:
    String, formatted for sensible display.
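
    A sketch of the described formatting follows; the exact thresholds and suffix strings here are assumptions consistent with the "69.02 MB" example above, not a copy of the library's code:

    ```python
    def displayBytes(bytes, digits=2):
        """Format a byte quantity with a sensible unit suffix."""
        bytes = float(bytes)
        if bytes < 1024.0:
            return "%d bytes" % int(bytes)   # no fractional bytes
        for factor, suffix in [(1024.0 ** 3, "GB"),
                               (1024.0 ** 2, "MB"),
                               (1024.0, "kB")]:
            if bytes >= factor:
                return "%.*f %s" % (digits, bytes / factor, suffix)
    ```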

    getFunctionReference(module, function)

    source code 

    Gets a reference to a named function.

    This does some hokey-pokey to get back a reference to a dynamically named function. For instance, say you wanted to get a reference to the os.path.isdir function. You could use:

      myfunc = getFunctionReference("os.path", "isdir")
    

    Although we won't bomb out directly, behavior is pretty much undefined if you pass in None or "" for either module or function.

    The only validation we enforce is that whatever we get back must be callable.

    I derived this code based on the internals of the Python unittest implementation. I don't claim to completely understand how it works.

    Parameters:
    • module (Something like "os.path" or "CedarBackup2.util") - Name of module associated with function.
    • function (Something like "isdir" or "getUidGid") - Name of function
    Returns:
    Reference to function associated with name.
    Raises:
    • ImportError - If the function cannot be found.
    • ValueError - If the resulting reference is not callable.

    Copyright: Some of this code, prior to customization, was originally part of the Python 2.3 codebase. Python code is copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved.
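
    In modern Python the same lookup can be done with importlib. Note this sketch differs from the documented behavior in one respect: a missing function raises AttributeError rather than ImportError:

    ```python
    import importlib

    def getFunctionReference(module, function):
        """Resolve a dotted module name plus a function name to a callable."""
        reference = getattr(importlib.import_module(module), function)
        if not callable(reference):
            raise ValueError("Resulting reference is not callable.")
        return reference
    ```

    For example, getFunctionReference("os.path", "isdir") returns the os.path.isdir function.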

    mount(devicePath, mountPoint, fsType)

    source code 

    Mounts the indicated device at the indicated mount point.

    For instance, to mount a CD, you might use device path /dev/cdrw, mount point /media/cdrw and filesystem type iso9660. You can safely use any filesystem type that is supported by mount on your platform. If the type is None, we'll attempt to let mount auto-detect it. This may or may not work on all systems.

    Parameters:
    • devicePath - Path of device to be mounted.
    • mountPoint - Path that device should be mounted at.
    • fsType - Type of the filesystem assumed to be available via the device.
    Raises:
    • IOError - If the device cannot be mounted.

    Note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line "mount" command, like UNIXes. It won't work on Windows.

    unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0)

    source code 

    Unmounts whatever device is mounted at the indicated mount point.

    Sometimes, it might not be possible to unmount the mount point immediately, if there are still files open there. Use the attempts and waitSeconds arguments to indicate how many unmount attempts to make and how many seconds to wait between attempts. If you pass in zero attempts, no attempts will be made (duh).

    If the indicated mount point is not really a mount point per os.path.ismount(), then it will be ignored. This seems to be a safer check than looking through /etc/mtab, since ismount() is already in the Python standard library and is documented as working on all POSIX systems.

    If removeAfter is True, then the mount point will be removed using os.rmdir() after the unmount action succeeds. If for some reason the mount point is not a directory, then it will not be removed.

    Parameters:
    • mountPoint - Mount point to be unmounted.
    • removeAfter - Remove the mount point after unmounting it.
    • attempts - Number of times to attempt the unmount.
    • waitSeconds - Number of seconds to wait between repeated attempts.
    Raises:
    • IOError - If the mount point is still mounted after attempts are exhausted.

    Note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line "mount" command, like UNIXes. It won't work on Windows.
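
    The attempts/waitSeconds retry logic can be illustrated in isolation. Both names below (retryUnmount and the injectable unmountOnce callable) are hypothetical, used here only to show the loop shape; the real function invokes the umount command directly:

    ```python
    import time

    def retryUnmount(unmountOnce, attempts=1, waitSeconds=0):
        """Retry loop matching the documented attempts/waitSeconds
        semantics.  unmountOnce is a callable returning True when the
        unmount succeeded; IOError is raised once attempts are exhausted."""
        for attempt in range(attempts):
            if unmountOnce():
                return
            if attempt < attempts - 1:
                time.sleep(waitSeconds)    # wait before the next attempt
        raise IOError("Mount point still mounted after %d attempt(s)." % attempts)
    ```

    With attempts=0, the loop body never runs and IOError is raised immediately, matching the documented "no attempts will be made" case.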

    deviceMounted(devicePath)

    source code 

    Indicates whether a specific filesystem device is currently mounted.

    We determine whether the device is mounted by looking through the system's mtab file. This file shows every currently-mounted filesystem, ordered by device. We only do the check if the mtab file exists and is readable. Otherwise, we assume that the device is not mounted.

    Parameters:
    • devicePath - Path of device to be checked
    Returns:
    True if device is mounted, false otherwise.

    Note: This only works on platforms that have a concept of an mtab file to show mounted volumes, like UNIXes. It won't work on Windows.
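
    The mtab lookup amounts to scanning the first field of each line. The helper below is hypothetical (it takes the mtab contents as a string rather than reading /etc/mtab, so the behavior is easy to see in isolation):

    ```python
    def deviceMountedInMtab(devicePath, mtabContents):
        """Return True if devicePath appears as the device (first field)
        on any line of mtab-style contents."""
        for line in mtabContents.splitlines():
            fields = line.split()
            if fields and fields[0] == devicePath:
                return True
        return False
    ```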

    sanitizeEnvironment()

    source code 

    Sanitizes the operating system environment.

    The operating system environment is contained in os.environ. This method sanitizes the contents of that dictionary.

    Currently, all it does is reset the locale (removing $LC_*) and set the default language ($LANG) to DEFAULT_LANGUAGE. This way, we can count on consistent localization regardless of what the end-user has configured. This is important for code that needs to parse program output.

    The os.environ dictionary is modified in-place. If $LANG is already set to the proper value, it is not re-set, so we can avoid the memory leaks that are documented to occur on BSD-based systems.

    Returns:
    Copy of the sanitized environment.
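
    The sanitization can be sketched on a passed-in mapping (rather than os.environ directly, so the effect is easy to observe). Since the full LOCALE_VARS list is elided in these docs, the sketch strips any LC_-prefixed key instead of enumerating it:

    ```python
    LANG_VAR = "LANG"
    DEFAULT_LANGUAGE = "C"

    def sanitizeEnvironment(environ):
        """Remove all $LC_* variables and force $LANG to DEFAULT_LANGUAGE,
        modifying the passed-in mapping in place; returns a copy."""
        for var in [key for key in environ if key.startswith("LC_")]:
            del environ[var]
        if environ.get(LANG_VAR) != DEFAULT_LANGUAGE:
            environ[LANG_VAR] = DEFAULT_LANGUAGE   # skip the no-op re-set
        return dict(environ)
    ```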

    dereferenceLink(path, absolute=True)

    source code 

    Dereference a soft link, optionally normalizing it to an absolute path.

    Parameters:
    • path - Path of link to dereference
    • absolute - Whether to normalize the result to an absolute path
    Returns:
    Dereferenced path, or original path if original is not a link.

    checkUnique(prefix, values)

    source code 

    Checks that all values are unique.

    The values list is checked for duplicate values. If there are duplicates, an exception is thrown. All duplicate values are listed in the exception.

    Parameters:
    • prefix - Prefix to use in the thrown exception
    • values - List of values to check
    Raises:
    • ValueError - If there are duplicates in the list
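
    The described check can be sketched as below; the exact exception message format is an assumption, but every duplicated value is listed, as documented:

    ```python
    def checkUnique(prefix, values):
        """Raise ValueError, listing every duplicated value, if the
        values list contains duplicates."""
        seen = set()
        duplicates = []
        for value in values:
            if value in seen and value not in duplicates:
                duplicates.append(value)
            seen.add(value)
        if duplicates:
            raise ValueError("%s %s" % (prefix, duplicates))
    ```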

    parseCommaSeparatedString(commaString)

    source code 

    Parses a list of values out of a comma-separated string.

    The items in the list are split by comma, and then have whitespace stripped. As a special case, if commaString is None, then None will be returned.

    Parameters:
    • commaString - List of values in comma-separated string format.
    Returns:
    Values from commaString split into a list, or None.
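
    A minimal sketch of the described parsing; note it keeps empty items (e.g. from a trailing comma), which the real implementation may discard:

    ```python
    def parseCommaSeparatedString(commaString):
        """Split on commas and strip whitespace from each item; None
        passes through unchanged."""
        if commaString is None:
            return None
        return [item.strip() for item in commaString.split(",")]
    ```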

    Variables Details

    LOCALE_VARS

    Value:
    ['LC_ADDRESS',
     'LC_ALL',
     'LC_COLLATE',
     'LC_CTYPE',
     'LC_IDENTIFICATION',
     'LC_MEASUREMENT',
     'LC_MESSAGES',
     'LC_MONETARY',
    ...
    

    CedarBackup2.actions.stage
    Package CedarBackup2 :: Package actions :: Module stage

    Module stage

    source code

    Implements the standard 'stage' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeStage(configPath, options, config)
    Executes the stage backup action.
    source code
     
    _createStagingDirs(config, dailyDir, peers)
    Creates staging directories as required.
    source code
     
    _getIgnoreFailuresFlag(options, config, peer)
    Gets the ignore failures flag based on options, configuration, and peer.
    source code
     
    _getDailyDir(config)
    Gets the daily staging directory.
    source code
     
    _getLocalPeers(config)
    Return a list of LocalPeer objects based on configuration.
    source code
     
    _getRemotePeers(config)
    Return a list of RemotePeer objects based on configuration.
    source code
     
    _getRemoteUser(config, remotePeer)
    Gets the remote user associated with a remote peer.
    source code
     
    _getLocalUser(config)
    Gets the local user that should be used.
    source code
     
    _getRcpCommand(config, remotePeer)
    Gets the RCP command associated with a remote peer.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.stage")
      __package__ = 'CedarBackup2.actions'
    Function Details

    executeStage(configPath, options, config)

    source code 

    Executes the stage backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.
    Notes:
    • The daily directory is derived once and then we stick with it, just in case a backup happens to span midnight.
    • As portions of the stage action are completed, we write various indicator files so that it's obvious which actions have been completed. Each peer gets a stage indicator in its collect directory, and the master gets a stage indicator in its daily staging directory. The store process uses the master's stage indicator to decide whether a directory is ready to be stored. Currently, nothing uses the indicator at each peer; it exists for reference only.

    _createStagingDirs(config, dailyDir, peers)

    source code 

    Creates staging directories as required.

    The main staging directory is the passed in daily directory, something like staging/2002/05/23. Then, individual peers get their own directories, i.e. staging/2002/05/23/host.

    Parameters:
    • config - Config object.
    • dailyDir - Daily staging directory.
    • peers - List of all configured peers.
    Returns:
    Dictionary mapping peer name to staging directory.

    _getIgnoreFailuresFlag(options, config, peer)

    source code 

    Gets the ignore failures flag based on options, configuration, and peer.

    Parameters:
    • options - Options object
    • config - Configuration object
    • peer - Peer to check
    Returns:
    Whether to ignore stage failures for this peer

    _getDailyDir(config)

    source code 

    Gets the daily staging directory.

    This is just a directory in the form staging/YYYY/MM/DD, i.e. staging/2000/10/07, except it will be an absolute path based on config.stage.targetDir.

    Parameters:
    • config - Config object
    Returns:
    Path of daily staging directory.
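    The derivation can be sketched with a hypothetical helper (the real code reads the target directory from config.stage.targetDir):

```python
import datetime
import os.path

def get_daily_dir(target_dir):
    """Build the absolute staging/YYYY/MM/DD path for today."""
    today = datetime.date.today()
    return os.path.join(target_dir, "%04d" % today.year,
                        "%02d" % today.month, "%02d" % today.day)
```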

    _getLocalPeers(config)

    source code 

    Return a list of LocalPeer objects based on configuration.

    Parameters:
    • config - Config object.
    Returns:
    List of LocalPeer objects.

    _getRemotePeers(config)

    source code 

    Return a list of RemotePeer objects based on configuration.

    Parameters:
    • config - Config object.
    Returns:
    List of RemotePeer objects.

    _getRemoteUser(config, remotePeer)

    source code 

    Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • config - Config object.
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Name of remote user associated with remote peer.

    _getLocalUser(config)

    source code 

    Gets the local user that should be used.

    Parameters:
    • config - Config object.
    Returns:
    Name of local user that should be used

    _getRcpCommand(config, remotePeer)

    source code 

    Gets the RCP command associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • config - Config object.
    • remotePeer - Configuration-style remote peer object.
    Returns:
    RCP command associated with remote peer.

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.split-module.html

    Module split


    Classes

    LocalConfig
    SplitConfig

    Functions

    executeAction

    Variables

    SPLIT_COMMAND
    SPLIT_INDICATOR
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend-pysrc.html
    Package CedarBackup2 :: Package extend

    Source Code for Package CedarBackup2.extend

    # -*- coding: iso-8859-1 -*-
    # vim: set ft=python ts=3 sw=3 expandtab:
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    #              C E D A R
    #          S O L U T I O N S       "Software done right."
    #           S O F T W A R E
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Author   : Kenneth J. Pronovici <pronovic@ieee.org>
    # Language : Python (>= 2.5)
    # Project  : Official Cedar Backup Extensions
    # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $
    # Purpose  : Provides package initialization
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    ########################################################################
    # Module documentation
    ########################################################################

    """
    Official Cedar Backup Extensions

    This package provides official Cedar Backup extensions.  These are Cedar Backup
    actions that are not part of the "standard" set of Cedar Backup actions, but
    are officially supported along with Cedar Backup.

    @author: Kenneth J. Pronovici <pronovic@ieee.org>
    """


    ########################################################################
    # Package initialization
    ########################################################################

    # Using 'from CedarBackup2.extend import *' will just import the modules listed
    # in the __all__ variable.

    __all__ = [ 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ]
    

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.DirectedGraph-class.html
    Package CedarBackup2 :: Module util :: Class DirectedGraph

    Class DirectedGraph

    source code

    object --+
             |
            DirectedGraph
    

    Represents a directed graph.

    A graph G=(V,E) consists of a set of vertices V together with a set E of vertex pairs or edges. In a directed graph, each edge also has an associated direction (from vertex v1 to vertex v2). A DirectedGraph object provides a way to construct a directed graph and execute a depth-first search.

    This data structure was designed based on the graphing chapter in The Algorithm Design Manual, by Steven S. Skiena.

    This class is intended to be used by Cedar Backup for dependency ordering. Because of this, it's not quite general-purpose. Unlike a "general" graph, every vertex in this graph has at least one edge pointing to it, from a special "start" vertex. This is so no vertices get "lost" either because they have no dependencies or because nothing depends on them.

    Instance Methods
     
    __init__(self, name)
    Directed graph constructor.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _getName(self)
    Property target used to get the graph name.
    source code
     
    createVertex(self, name)
    Creates a named vertex.
    source code
     
    createEdge(self, start, finish)
    Adds an edge with an associated direction, from start vertex to finish vertex.
    source code
     
    topologicalSort(self)
    Implements a topological sort of the graph.
    source code
     
    _topologicalSort(self, vertex, ordering)
    Recursive depth first search function implementing topological sort.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Class Variables
      _UNDISCOVERED = 0
      _DISCOVERED = 1
      _EXPLORED = 2
    Properties
      name
    Name of the graph.

    Inherited from object: __class__

    Method Details

    __init__(self, name)
    (Constructor)

    source code 

    Directed graph constructor.

    Parameters:
    • name (String value.) - Name of this graph.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    createVertex(self, name)

    source code 

    Creates a named vertex.

    Parameters:
    • name - vertex name
    Raises:
    • ValueError - If the vertex name is None or empty.

    createEdge(self, start, finish)

    source code 

    Adds an edge with an associated direction, from start vertex to finish vertex.

    Parameters:
    • start - Name of start vertex.
    • finish - Name of finish vertex.
    Raises:
    • ValueError - If one of the named vertices is unknown.

    topologicalSort(self)

    source code 

    Implements a topological sort of the graph.

    This method also enforces that the graph is a directed acyclic graph, which is a requirement of a topological sort.

    A directed acyclic graph (or "DAG") is a directed graph with no directed cycles. A topological sort of a DAG is an ordering on the vertices such that all edges go from left to right. Only an acyclic graph can have a topological sort, but any DAG has at least one topological sort.

    Since a topological sort only makes sense for an acyclic graph, this method throws an exception if a cycle is found.

    A depth-first search only makes sense if the graph is acyclic. If the graph contains any cycles, it is not possible to determine a consistent ordering for the vertices.

    Returns:
    Ordering on the vertices so that all edges go from left to right.
    Raises:
    • ValueError - If a cycle is found in the graph.

    Note: If a particular vertex has no edges, then its position in the final list depends on the order in which the vertices were created in the graph. If you're using this method to determine a dependency order, this makes sense: a vertex with no dependencies can go anywhere (and will).
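    The approach can be sketched with a standalone function that uses the same three vertex states as the class variables above. This is a simplified illustration, not the class's actual code:

```python
# Vertex states, mirroring _UNDISCOVERED, _DISCOVERED, and _EXPLORED above.
UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2

def topological_sort(edges):
    """DFS-based topological sort with cycle detection.

    edges maps each vertex to the list of vertices it points to.
    """
    state = {vertex: UNDISCOVERED for vertex in edges}
    ordering = []

    def visit(vertex):
        state[vertex] = DISCOVERED
        for successor in edges[vertex]:
            if state[successor] == DISCOVERED:
                # A back edge to a vertex still on the DFS stack means a cycle.
                raise ValueError("Cycle found in graph.")
            if state[successor] == UNDISCOVERED:
                visit(successor)
        state[vertex] = EXPLORED
        ordering.insert(0, vertex)  # prepend, so all edges go left to right

    for vertex in edges:
        if state[vertex] == UNDISCOVERED:
            visit(vertex)
    return ordering
```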

    _topologicalSort(self, vertex, ordering)

    source code 

    Recursive depth first search function implementing topological sort.

    Parameters:
    • vertex - Vertex to search
    • ordering - List of vertices in proper order

    Property Details

    name

    Name of the graph.

    Get Method:
    _getName(self) - Property target used to get the graph name.

    CedarBackup2-2.22.0/doc/interface/toc.html

    Table of Contents


    Everything

    Modules

    CedarBackup2
    CedarBackup2.action
    CedarBackup2.actions
    CedarBackup2.actions.collect
    CedarBackup2.actions.constants
    CedarBackup2.actions.initialize
    CedarBackup2.actions.purge
    CedarBackup2.actions.rebuild
    CedarBackup2.actions.stage
    CedarBackup2.actions.store
    CedarBackup2.actions.util
    CedarBackup2.actions.validate
    CedarBackup2.cli
    CedarBackup2.config
    CedarBackup2.customize
    CedarBackup2.extend
    CedarBackup2.extend.capacity
    CedarBackup2.extend.encrypt
    CedarBackup2.extend.mbox
    CedarBackup2.extend.mysql
    CedarBackup2.extend.postgresql
    CedarBackup2.extend.split
    CedarBackup2.extend.subversion
    CedarBackup2.extend.sysinfo
    CedarBackup2.filesystem
    CedarBackup2.image
    CedarBackup2.knapsack
    CedarBackup2.peer
    CedarBackup2.release
    CedarBackup2.testutil
    CedarBackup2.tools
    CedarBackup2.tools.span
    CedarBackup2.util
    CedarBackup2.writer
    CedarBackup2.writers
    CedarBackup2.writers.cdwriter
    CedarBackup2.writers.dvdwriter
    CedarBackup2.writers.util
    CedarBackup2.xmlutil

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.encrypt-module.html
    Package CedarBackup2 :: Package extend :: Module encrypt

    Module encrypt

    source code

    Provides an extension to encrypt staging directories.

    When this extension is executed, all backed-up files in the configured Cedar Backup staging directory will be encrypted using gpg. Any directory which has already been encrypted (as indicated by the cback.encrypt file) will be ignored.

    This extension requires a new configuration section <encrypt> and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      EncryptConfig
    Class representing encrypt configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the encrypt backup action.
    source code
     
    _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup)
    Encrypts the contents of a daily staging directory.
    source code
     
    _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False)
    Encrypts the source file using the indicated mode.
    source code
     
    _encryptFileWithGpg(sourcePath, recipient)
    Encrypts the indicated source file using GPG.
    source code
     
    _confirmGpgRecipient(recipient)
    Confirms that a recipient's public key is known to GPG.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.extend.encrypt")
      GPG_COMMAND = ['gpg']
      VALID_ENCRYPT_MODES = ['gpg']
      ENCRYPT_INDICATOR = 'cback.encrypt'
      __package__ = 'CedarBackup2.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the encrypt backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup)

    source code 

    Encrypts the contents of a daily staging directory.

    Indicator files are ignored. All other files are encrypted. The only valid encrypt mode is "gpg".

    Parameters:
    • dailyDir - Daily directory to encrypt
    • encryptMode - Encryption mode (only "gpg" is allowed)
    • encryptTarget - Encryption target (GPG recipient for "gpg" mode)
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    Raises:
    • ValueError - If the encrypt mode is not supported.
    • ValueError - If the daily staging directory does not exist.

    _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False)

    source code 

    Encrypts the source file using the indicated mode.

    The encrypted file will be owned by the indicated backup user and group. If removeSource is True, then the source file will be removed after it is successfully encrypted.

    Currently, only the "gpg" encrypt mode is supported.

    Parameters:
    • sourcePath - Absolute path of the source file to encrypt
    • encryptMode - Encryption mode (only "gpg" is allowed)
    • encryptTarget - Encryption target (GPG recipient)
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    • removeSource - Indicates whether to remove the source file
    Returns:
    Path to the newly-created encrypted file.
    Raises:
    • ValueError - If an invalid encrypt mode is passed in.
    • IOError - If there is a problem accessing, encrypting or removing the source file.

    _encryptFileWithGpg(sourcePath, recipient)

    source code 

    Encrypts the indicated source file using GPG.

    The encrypted file will be in GPG's binary output format and will have the same name as the source file plus a ".gpg" extension. The source file will not be modified or removed by this function call.

    Parameters:
    • sourcePath - Absolute path of file to be encrypted.
    • recipient - Recipient name to be passed to GPG's "-r" option
    Returns:
    Path to the newly-created encrypted file.
    Raises:
    • IOError - If there is a problem encrypting the file.
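    A sketch of the kind of gpg invocation this implies. The exact flag set here is an assumption for illustration, not taken from the library's source:

```python
def build_gpg_command(source_path, recipient):
    """Build a gpg command producing binary output at source_path + '.gpg'.

    The source file is left untouched; flags are illustrative only.
    """
    encrypted_path = source_path + ".gpg"
    command = ["gpg", "--batch", "--yes",       # non-interactive operation
               "-r", recipient,                 # GPG recipient
               "-o", encrypted_path,            # binary output file
               "--encrypt", source_path]
    return command, encrypted_path
```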

    _confirmGpgRecipient(recipient)

    source code 

    Confirms that a recipient's public key is known to GPG. Throws an exception if there is a problem, or returns normally otherwise.

    Parameters:
    • recipient - Recipient name
    Raises:
    • IOError - If the recipient's public key is not known to GPG.

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.actions.collect-module.html

    Module collect


    Functions

    executeCollect

    Variables

    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.PurgeConfig-class.html
    Package CedarBackup2 :: Module config :: Class PurgeConfig

    Class PurgeConfig

    source code

    object --+
             |
            PurgeConfig
    

    Class representing a Cedar Backup purge configuration.

    The following restrictions exist on data in this class:

    • The purge directory list must be a list of PurgeDir objects.

    For the purgeDirs list, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element is a PurgeDir.


    Note: Lists within this class are "unordered" for equality comparisons.
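    The kind of validation util.ObjectTypeList is described as providing can be sketched like this. This is hypothetical and partial; a full version would also override insert, extend, and item assignment:

```python
class TypedList(list):
    """A list that only accepts elements of a fixed type (append shown only)."""

    def __init__(self, object_type):
        super().__init__()
        self.object_type = object_type

    def append(self, item):
        # Reject anything that is not an instance of the configured type.
        if not isinstance(item, self.object_type):
            raise ValueError("Item must be a %s." % self.object_type.__name__)
        super().append(item)
```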

    Instance Methods
     
    __init__(self, purgeDirs=None)
    Constructor for the Purge class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setPurgeDirs(self, value)
    Property target used to set the purge dirs list.
    source code
     
    _getPurgeDirs(self)
    Property target used to get the purge dirs list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      purgeDirs
    List of directories to purge.

    Inherited from object: __class__

    Method Details

    __init__(self, purgeDirs=None)
    (Constructor)

    source code 

    Constructor for the Purge class.

    Parameters:
    • purgeDirs - List of purge directories.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setPurgeDirs(self, value)

    source code 

    Property target used to set the purge dirs list. Either the value must be None or each element must be a PurgeDir.

    Raises:
    • ValueError - If the value is not a PurgeDir

    Property Details

    purgeDirs

    List of directories to purge.

    Get Method:
    _getPurgeDirs(self) - Property target used to get the purge dirs list.
    Set Method:
    _setPurgeDirs(self, value) - Property target used to set the purge dirs list.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.encrypt.LocalConfig-class.html
    Package CedarBackup2 :: Package extend :: Module encrypt :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit encrypt-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds an <encrypt> configuration section as the next child of a parent.
    source code
     
    _setEncrypt(self, value)
    Property target used to set the encrypt configuration value.
    source code
     
    _getEncrypt(self)
    Property target used to get the encrypt configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseEncrypt(parent)
    Parses an encrypt configuration section.
    source code
    Properties
      encrypt
    Encrypt configuration in terms of an EncryptConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Encrypt configuration must be filled in. Within that, both the encrypt mode and encrypt target must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds an <encrypt> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      encryptMode    //cb_config/encrypt/encrypt_mode
      encryptTarget  //cb_config/encrypt/encrypt_target
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
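    Given the field paths above, the emitted section would look roughly like this (values illustrative):

```xml
<cb_config>
   <encrypt>
      <encrypt_mode>gpg</encrypt_mode>
      <encrypt_target>backup@example.com</encrypt_target>
   </encrypt>
</cb_config>
```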

    _setEncrypt(self, value)

    source code 

    Property target used to set the encrypt configuration value. If not None, the value must be an EncryptConfig object.

    Raises:
    • ValueError - If the value is not an EncryptConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the encrypt configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseEncrypt(parent)
    Static Method

    source code 

    Parses an encrypt configuration section.

    We read the following individual fields:

      encryptMode    //cb_config/encrypt/encrypt_mode
      encryptTarget  //cb_config/encrypt/encrypt_target
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    EncryptConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    Property Details

    encrypt

    Encrypt configuration in terms of an EncryptConfig object.

    Get Method:
    _getEncrypt(self) - Property target used to get the encrypt configuration value.
    Set Method:
    _setEncrypt(self, value) - Property target used to set the encrypt configuration value.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.encrypt.EncryptConfig-class.html
    Package CedarBackup2 :: Package extend :: Module encrypt :: Class EncryptConfig

    Class EncryptConfig

    source code

    object --+
             |
            EncryptConfig
    

    Class representing encrypt configuration.

    Encrypt configuration is used for encrypting staging directories.

    The following restrictions exist on data in this class:

    • The encrypt mode must be one of the values in VALID_ENCRYPT_MODES
    • The encrypt target value must be a non-empty string
    Instance Methods
     
    __init__(self, encryptMode=None, encryptTarget=None)
    Constructor for the EncryptConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setEncryptMode(self, value)
    Property target used to set the encrypt mode.
    source code
     
    _getEncryptMode(self)
    Property target used to get the encrypt mode.
    source code
     
    _setEncryptTarget(self, value)
    Property target used to set the encrypt target.
    source code
     
    _getEncryptTarget(self)
    Property target used to get the encrypt target.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      encryptMode
    Encrypt mode.
      encryptTarget
    Encrypt target (i.e. GPG recipient).

    Inherited from object: __class__

    Method Details

    __init__(self, encryptMode=None, encryptTarget=None)
    (Constructor)

    source code 

    Constructor for the EncryptConfig class.

    Parameters:
    • encryptMode - Encryption mode
    • encryptTarget - Encryption target (for instance, GPG recipient)
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setEncryptMode(self, value)

    source code 

    Property target used to set the encrypt mode. If not None, the mode must be one of the values in VALID_ENCRYPT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    encryptMode

    Encrypt mode.

    Get Method:
    _getEncryptMode(self) - Property target used to get the encrypt mode.
    Set Method:
    _setEncryptMode(self, value) - Property target used to set the encrypt mode.

    encryptTarget

    Encrypt target (i.e. GPG recipient).

    Get Method:
    _getEncryptTarget(self) - Property target used to get the encrypt target.
    Set Method:
    _setEncryptTarget(self, value) - Property target used to set the encrypt target.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.CollectFile-class.html: CedarBackup2.config.CollectFile
    Package CedarBackup2 :: Module config :: Class CollectFile

    Class CollectFile

    source code

    object --+
             |
            CollectFile
    

    Class representing a Cedar Backup collect file.

    The following restrictions exist on data in this class:

    • The absolute path must be an absolute path.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The archive mode must be one of the values in VALID_ARCHIVE_MODES.

    Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, archiveMode=None)
    Constructor for the CollectFile class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setArchiveMode(self, value)
    Property target used to set the archive mode.
    source code
     
    _getArchiveMode(self)
    Property target used to get the archive mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      absolutePath
    Absolute path of the file to collect.
      collectMode
    Overridden collect mode for this file.
      archiveMode
    Overridden archive mode for this file.

    Inherited from object: __class__

    Method Details

    __init__(self, absolutePath=None, collectMode=None, archiveMode=None)
    (Constructor)

    source code 

    Constructor for the CollectFile class.

    Parameters:
    • absolutePath - Absolute path of the file to collect.
    • collectMode - Overridden collect mode for this file.
    • archiveMode - Overridden archive mode for this file.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of the values in VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    absolutePath

    Absolute path of the file to collect.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this file.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Overridden archive mode for this file.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

    CedarBackup2-2.22.0/doc/interface/identifier-index.html: Identifier Index
     

    Identifier Index




    CedarBackup2-2.22.0/doc/interface/CedarBackup2.customize-pysrc.html: CedarBackup2.customize
    Package CedarBackup2 :: Module customize

    Source Code for Module CedarBackup2.customize

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Copyright (c) 2010 Kenneth J. Pronovici. 
    12  # All rights reserved. 
    13  # 
    14  # This program is free software; you can redistribute it and/or 
    15  # modify it under the terms of the GNU General Public License, 
    16  # Version 2, as published by the Free Software Foundation. 
    17  # 
    18  # This program is distributed in the hope that it will be useful, 
    19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
    20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
    21  # 
    22  # Copies of the GNU General Public License are available from 
    23  # the Free Software Foundation website, http://www.gnu.org/. 
    24  # 
    25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    26  # 
    27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    28  # Language : Python (>= 2.5) 
    29  # Project  : Cedar Backup, release 2 
    30  # Revision : $Id: customize.py 998 2010-07-07 19:56:08Z pronovic $ 
    31  # Purpose  : Implements customized behavior. 
    32  # 
    33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    34   
    35  ######################################################################## 
    36  # Module documentation 
    37  ######################################################################## 
    38   
    39  """ 
    40  Implements customized behavior. 
    41   
    42  Some behaviors need to vary when packaged for certain platforms.  For instance, 
    43  while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible 
    44  utilities called wodim and genisoimage. I want there to be one single place 
    45  where Cedar Backup is patched for Debian, rather than having to maintain a 
    46  variety of patches in different places. 
    47   
    48  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    49  """ 
    50   
    51  ######################################################################## 
    52  # Imported modules 
    53  ######################################################################## 
    54   
    55  # System modules 
    56  import logging 
    57   
    58   
    59  ######################################################################## 
    60  # Module-wide constants and variables 
    61  ######################################################################## 
    62   
    63  logger = logging.getLogger("CedarBackup2.log.customize") 
    64   
    65  PLATFORM = "standard" 
    66  #PLATFORM = "debian" 
    67   
    68  DEBIAN_CDRECORD = "/usr/bin/wodim" 
    69  DEBIAN_MKISOFS = "/usr/bin/genisoimage" 
    70   
    71   
    72  ####################################################################### 
    73  # Public functions 
    74  ####################################################################### 
    75   
    76  ################################ 
    77  # customizeOverrides() function 
    78  ################################ 
    79   
    
     80  def customizeOverrides(config, platform=PLATFORM):
     81     """
     82     Modify command overrides based on the configured platform.
     83   
     84     On some platforms, we want to add command overrides to configuration.  Each
     85     override will only be added if the configuration does not already contain an
     86     override with the same name.  That way, the user still has a way to choose
     87     their own version of the command if they want.
     88   
     89     @param config: Configuration to modify
     90     @param platform: Platform that is in use
     91     """
     92     if platform == "debian":
     93        logger.info("Overriding cdrecord for Debian platform: %s" % DEBIAN_CDRECORD)
     94        config.options.addOverride("cdrecord", DEBIAN_CDRECORD)
     95        logger.info("Overriding mkisofs for Debian platform: %s" % DEBIAN_MKISOFS)
     96        config.options.addOverride("mkisofs", DEBIAN_MKISOFS)
     97   
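    A hedged usage sketch of `customizeOverrides()` follows. The `OptionsStub` and `ConfigStub` classes are hypothetical stand-ins for Cedar Backup's real `Options` and `Config` objects, kept just detailed enough to exercise the override logic described in the docstring:

    ```python
    # Illustrative stubs, not Cedar Backup code: a config object exposing
    # options.addOverride(), plus the override logic from customizeOverrides()
    # reproduced without logging.

    DEBIAN_CDRECORD = "/usr/bin/wodim"
    DEBIAN_MKISOFS = "/usr/bin/genisoimage"

    class OptionsStub(object):
        def __init__(self):
            self.overrides = {}
        def addOverride(self, command, absolutePath):
            # Per the docstring, an override is only added if one with the
            # same name does not already exist (assumed behavior, mirrored here).
            if command not in self.overrides:
                self.overrides[command] = absolutePath

    class ConfigStub(object):
        def __init__(self):
            self.options = OptionsStub()

    def customizeOverrides(config, platform="standard"):
        # Same shape as the documented function, minus logging.
        if platform == "debian":
            config.options.addOverride("cdrecord", DEBIAN_CDRECORD)
            config.options.addOverride("mkisofs", DEBIAN_MKISOFS)

    config = ConfigStub()
    customizeOverrides(config, platform="debian")
    ```

    On the default "standard" platform the function is a no-op, so user-supplied overrides in configuration always win.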

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.writers-module.html: writers

    Module writers


    Variables


    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.ReferenceConfig-class.html: CedarBackup2.config.ReferenceConfig
    Package CedarBackup2 :: Module config :: Class ReferenceConfig

    Class ReferenceConfig

    source code

    object --+
             |
            ReferenceConfig
    

    Class representing a Cedar Backup reference configuration.

    The reference information is just used for saving off metadata about configuration and exists mostly for backwards-compatibility with Cedar Backup 1.x.

    Instance Methods
     
    __init__(self, author=None, revision=None, description=None, generator=None)
    Constructor for the ReferenceConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAuthor(self, value)
    Property target used to set the author value.
    source code
     
    _getAuthor(self)
    Property target used to get the author value.
    source code
     
    _setRevision(self, value)
    Property target used to set the revision value.
    source code
     
    _getRevision(self)
    Property target used to get the revision value.
    source code
     
    _setDescription(self, value)
    Property target used to set the description value.
    source code
     
    _getDescription(self)
    Property target used to get the description value.
    source code
     
    _setGenerator(self, value)
    Property target used to set the generator value.
    source code
     
    _getGenerator(self)
    Property target used to get the generator value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      author
    Author of the configuration file.
      revision
    Revision of the configuration file.
      description
    Description of the configuration file.
      generator
    Tool that generated the configuration file.

    Inherited from object: __class__

    Method Details

    __init__(self, author=None, revision=None, description=None, generator=None)
    (Constructor)

    source code 

    Constructor for the ReferenceConfig class.

    Parameters:
    • author - Author of the configuration file.
    • revision - Revision of the configuration file.
    • description - Description of the configuration file.
    • generator - Tool that generated the configuration file.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAuthor(self, value)

    source code 

    Property target used to set the author value. No validations.

    _setRevision(self, value)

    source code 

    Property target used to set the revision value. No validations.

    _setDescription(self, value)

    source code 

    Property target used to set the description value. No validations.

    _setGenerator(self, value)

    source code 

    Property target used to set the generator value. No validations.


    Property Details

    author

    Author of the configuration file.

    Get Method:
    _getAuthor(self) - Property target used to get the author value.
    Set Method:
    _setAuthor(self, value) - Property target used to set the author value.

    revision

    Revision of the configuration file.

    Get Method:
    _getRevision(self) - Property target used to get the revision value.
    Set Method:
    _setRevision(self, value) - Property target used to set the revision value.

    description

    Description of the configuration file.

    Get Method:
    _getDescription(self) - Property target used to get the description value.
    Set Method:
    _setDescription(self, value) - Property target used to set the description value.

    generator

    Tool that generated the configuration file.

    Get Method:
    _getGenerator(self) - Property target used to get the generator value.
    Set Method:
    _setGenerator(self, value) - Property target used to set the generator value.

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.sysinfo-module.html: sysinfo

    Module sysinfo


    Functions

    executeAction

    Variables

    DPKG_COMMAND
    DPKG_PATH
    FDISK_COMMAND
    FDISK_PATH
    LS_COMMAND
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers-pysrc.html: CedarBackup2.writers
    Package CedarBackup2 :: Package writers

    Source Code for Package CedarBackup2.writers

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ 
    15  # Purpose  : Provides package initialization 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  ######################################################################## 
    20  # Module documentation 
    21  ######################################################################## 
    22   
    23  """ 
    24  Cedar Backup writers. 
    25   
     26  This package consolidates all of the modules that implement "image writer" 
    27  functionality, including utilities and specific writer implementations. 
    28   
    29  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    30  """ 
    31   
    32   
    33  ######################################################################## 
    34  # Package initialization 
    35  ######################################################################## 
    36   
    37  # Using 'from CedarBackup2.writers import *' will just import the modules listed 
    38  # in the __all__ variable. 
    39   
    40  __all__ = [ 'util', 'cdwriter', 'dvdwriter', ] 
    41   
    

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.capacity.CapacityConfig-class.html: CedarBackup2.extend.capacity.CapacityConfig
    Package CedarBackup2 :: Package extend :: Module capacity :: Class CapacityConfig

    Class CapacityConfig

    source code

    object --+
             |
            CapacityConfig
    

    Class representing capacity configuration.

    The following restrictions exist on data in this class:

    • The maximum percentage utilized must be a PercentageQuantity
    • The minimum bytes remaining must be a ByteQuantity
    Instance Methods
     
    __init__(self, maxPercentage=None, minBytes=None)
    Constructor for the CapacityConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setMaxPercentage(self, value)
    Property target used to set the maxPercentage value.
    source code
     
    _getMaxPercentage(self)
    Property target used to get the maxPercentage value.
    source code
     
    _setMinBytes(self, value)
    Property target used to set the minimum bytes value.
    source code
     
    _getMinBytes(self)
    Property target used to get the bytes remaining value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      maxPercentage
    Maximum percentage of the media that may be utilized.
      minBytes
    Minimum number of free bytes that must be available.

    Inherited from object: __class__

    Method Details

    __init__(self, maxPercentage=None, minBytes=None)
    (Constructor)

    source code 

    Constructor for the CapacityConfig class.

    Parameters:
    • maxPercentage - Maximum percentage of the media that may be utilized
    • minBytes - Minimum number of free bytes that must be available
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setMaxPercentage(self, value)

    source code 

    Property target used to set the maxPercentage value. If not None, the value must be a PercentageQuantity object.

    Raises:
    • ValueError - If the value is not a PercentageQuantity

    _setMinBytes(self, value)

    source code 

    Property target used to set the minimum bytes value. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

    Property Details

    maxPercentage

    Maximum percentage of the media that may be utilized.

    Get Method:
    _getMaxPercentage(self) - Property target used to get the maxPercentage value.
    Set Method:
    _setMaxPercentage(self, value) - Property target used to set the maxPercentage value.

    minBytes

    Minimum number of free bytes that must be available.

    Get Method:
    _getMinBytes(self) - Property target used to get the bytes remaining value.
    Set Method:
    _setMinBytes(self, value) - Property target used to set the minimum bytes value.
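    A sketch of how capacity limits like `maxPercentage` and `minBytes` might be applied when deciding whether media is too full. The `capacity_exceeded()` helper below is hypothetical, not Cedar Backup's actual implementation:

    ```python
    # Illustrative helper (an assumption, not Cedar Backup code): either limit
    # being violated means the backup should be treated as over capacity.

    def capacity_exceeded(used_bytes, total_bytes, max_percentage=None, min_bytes=None):
        """Return True if either configured capacity limit is violated."""
        if max_percentage is not None:
            # Maximum percentage of the media that may be utilized.
            if (float(used_bytes) / float(total_bytes)) * 100.0 > max_percentage:
                return True
        if min_bytes is not None:
            # Minimum number of free bytes that must remain available.
            if (total_bytes - used_bytes) < min_bytes:
                return True
        return False
    ```

    Both checks are optional, matching the class's two independent, possibly-None configuration values.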

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.initialize-module.html: CedarBackup2.actions.initialize
    Package CedarBackup2 :: Package actions :: Module initialize

    Module initialize

    source code

    Implements the standard 'initialize' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeInitialize(configPath, options, config)
    Executes the initialize action.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.initialize")
      __package__ = 'CedarBackup2.actions'
    Function Details

    executeInitialize(configPath, options, config)

    source code 

    Executes the initialize action.

    The initialize action initializes the media currently in the writer device so that Cedar Backup can recognize it later. This is an optional step; it's only required if checkMedia is set on the store configuration.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mysql-pysrc.html: CedarBackup2.extend.mysql
    Package CedarBackup2 :: Package extend :: Module mysql

    Source Code for Module CedarBackup2.extend.mysql

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2005,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Revision : $Id: mysql.py 1022 2011-10-11 23:27:49Z pronovic $ 
     31  # Purpose  : Provides an extension to back up MySQL databases. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides an extension to back up MySQL databases. 
     41   
     42  This is a Cedar Backup extension used to back up MySQL databases via the Cedar 
     43  Backup command line.  It requires a new configuration section <mysql> and is 
     44  intended to be run either immediately before or immediately after the standard 
     45  collect action.  Aside from its own configuration, it requires the options and 
     46  collect configuration sections in the standard Cedar Backup configuration file. 
     47   
     48  The backup is done via the C{mysqldump} command included with the MySQL 
     49  product.  Output can be compressed using C{gzip} or C{bzip2}.  Administrators 
     50  can configure the extension either to back up all databases or to back up only 
     51  specific databases.  Note that this code always produces a full backup.  There 
     52  is currently no facility for making incremental backups.  If/when someone has a 
     53  need for this and can describe how to do it, I'll update this extension or 
     54  provide another. 
     55   
     56  The extension assumes that all configured databases can be backed up by a 
     57  single user.  Often, the "root" database user will be used.  An alternative is 
     58  to create a separate MySQL "backup" user and grant that user rights to read 
     59  (but not write) various databases as needed.  This second option is probably 
     60  the best choice. 
     61   
     62  The extension accepts a username and password in configuration.  However, you 
     63  probably do not want to provide those values in Cedar Backup configuration. 
     64  This is because Cedar Backup will provide these values to C{mysqldump} via the 
     65  command-line C{--user} and C{--password} switches, which will be visible to 
     66  other users in the process listing. 
     67   
     68  Instead, you should configure the username and password in one of MySQL's 
     69  configuration files.  Typically, that would be done by putting a stanza like 
     70  this in C{/root/.my.cnf}:: 
     71   
     72     [mysqldump] 
     73     user     = root 
     74     password = <secret> 
     75   
     76  Regardless of whether you are using C{~/.my.cnf} or C{/etc/cback.conf} to store 
     77  database login and password information, you should be careful about who is 
     78  allowed to view that information.  Typically, this means locking down 
     79  permissions so that only the file owner can read the file contents (i.e. use 
     80  mode C{0600}). 
     81   
     82  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     83  """ 
     84   
     85  ######################################################################## 
     86  # Imported modules 
     87  ######################################################################## 
     88   
     89  # System modules 
     90  import os 
     91  import logging 
     92  from gzip import GzipFile 
     93  from bz2 import BZ2File 
     94   
     95  # Cedar Backup modules 
     96  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode 
     97  from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean 
     98  from CedarBackup2.config import VALID_COMPRESS_MODES 
     99  from CedarBackup2.util import resolveCommand, executeCommand 
    100  from CedarBackup2.util import ObjectTypeList, changeOwnership 
    101   
    102   
    103  ######################################################################## 
    104  # Module-wide constants and variables 
    105  ######################################################################## 
    106   
    107  logger = logging.getLogger("CedarBackup2.log.extend.mysql") 
    108  MYSQLDUMP_COMMAND = [ "mysqldump", ] 
    
    109 110 111 ######################################################################## 112 # MysqlConfig class definition 113 ######################################################################## 114 115 -class MysqlConfig(object):
    116 117 """ 118 Class representing MySQL configuration. 119 120 The MySQL configuration information is used for backing up MySQL databases. 121 122 The following restrictions exist on data in this class: 123 124 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 125 - The 'all' flag must be 'Y' if no databases are defined. 126 - The 'all' flag must be 'N' if any databases are defined. 127 - Any values in the databases list must be strings. 128 129 @sort: __init__, __repr__, __str__, __cmp__, user, password, all, databases 130 """ 131
    132 - def __init__(self, user=None, password=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622
    133 """ 134 Constructor for the C{MysqlConfig} class. 135 136 @param user: User to execute backup as. 137 @param password: Password associated with user. 138 @param compressMode: Compress mode for backed-up files. 139 @param all: Indicates whether to back up all databases. 140 @param databases: List of databases to back up. 141 """ 142 self._user = None 143 self._password = None 144 self._compressMode = None 145 self._all = None 146 self._databases = None 147 self.user = user 148 self.password = password 149 self.compressMode = compressMode 150 self.all = all 151 self.databases = databases
    152
   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "MysqlConfig(%s, %s, %s, %s, %s)" % (self.user, self.password, self.compressMode, self.all, self.databases)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.user != other.user:
         if self.user < other.user:
            return -1
         else:
            return 1
      if self.password != other.password:
         if self.password < other.password:
            return -1
         else:
            return 1
      if self.compressMode != other.compressMode:
         if self.compressMode < other.compressMode:
            return -1
         else:
            return 1
      if self.all != other.all:
         if self.all < other.all:
            return -1
         else:
            return 1
      if self.databases != other.databases:
         if self.databases < other.databases:
            return -1
         else:
            return 1
      return 0

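The field-by-field __cmp__ above is equivalent to a single tuple comparison. A minimal standalone sketch (plain dicts stand in for MysqlConfig, and the -1/0/1 result is computed explicitly so the sketch also runs on Python 3, which has no cmp()):

```python
def mysql_config_cmp(a, b):
    # Compare two config dicts the way MysqlConfig.__cmp__ does, but by
    # building one tuple per object instead of one if-block per field.
    if b is None:
        return 1
    fields = ("user", "password", "compressMode", "all", "databases")
    ka = tuple(a[k] for k in fields)
    kb = tuple(b[k] for k in fields)
    return (ka > kb) - (ka < kb)  # -1/0/1, like Python 2's cmp()

left  = {"user": "backup", "password": "x", "compressMode": "gzip", "all": False, "databases": ["bugs"]}
right = {"user": "backup", "password": "x", "compressMode": "gzip", "all": False, "databases": ["wiki"]}
print(mysql_config_cmp(left, right))  # → -1, since "bugs" sorts before "wiki"
```

The tuple form trades the early-exit structure for brevity; both orderings agree because tuples compare element by element, left to right.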
   def _setUser(self, value):
      """
      Property target used to set the user value.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("User must be non-empty string.")
      self._user = value

   def _getUser(self):
      """
      Property target used to get the user value.
      """
      return self._user

   def _setPassword(self, value):
      """
      Property target used to set the password value.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Password must be non-empty string.")
      self._password = value

   def _getPassword(self):
      """
      Property target used to get the password value.
      """
      return self._password

   def _setCompressMode(self, value):
      """
      Property target used to set the compress mode.
      If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COMPRESS_MODES:
            raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
      self._compressMode = value

   def _getCompressMode(self):
      """
      Property target used to get the compress mode.
      """
      return self._compressMode

   def _setAll(self, value):
      """
      Property target used to set the 'all' flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._all = True
      else:
         self._all = False

   def _getAll(self):
      """
      Property target used to get the 'all' flag.
      """
      return self._all

   def _setDatabases(self, value):
      """
      Property target used to set the databases list.
      Either the value must be C{None} or each element must be a string.
      @raise ValueError: If the value is not a string.
      """
      if value is None:
         self._databases = None
      else:
         for database in value:
            if len(database) < 1:
               raise ValueError("Each database must be a non-empty string.")
         try:
            saved = self._databases
            self._databases = ObjectTypeList(basestring, "string")
            self._databases.extend(value)
         except Exception, e:
            self._databases = saved
            raise e

   def _getDatabases(self):
      """
      Property target used to get the databases list.
      """
      return self._databases

   user = property(_getUser, _setUser, None, "User to execute backup as.")
   password = property(_getPassword, _setPassword, None, "Password associated with user.")
   compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.")
   all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.")
   databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.")
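Each field above follows the same idiom: a private attribute, a validating _set target, a _get target, and a property() tying them together, so that the plain assignments in __init__ are routed through validation. A minimal self-contained sketch of the pattern (a made-up one-field class, not the real MysqlConfig):

```python
class UserHolder(object):
    """Sketch of the _set/_get/property() idiom used by MysqlConfig."""

    def __init__(self, user=None):
        self._user = None
        self.user = user      # plain assignment, routed through _setUser

    def _setUser(self, value):
        # Same rule as MysqlConfig: None is allowed, empty strings are not.
        if value is not None:
            if len(value) < 1:
                raise ValueError("User must be non-empty string.")
        self._user = value

    def _getUser(self):
        return self._user

    user = property(_getUser, _setUser, None, "User to execute backup as.")

holder = UserHolder("backup")
print(holder.user)  # → backup
```

Because validation lives in the setter, a caller filling the object field by field gets a ValueError at assignment time, not later when the configuration is used.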

########################################################################
# LocalConfig class definition
########################################################################

class LocalConfig(object):

   """
   Class representing this extension's configuration document.

   This is not a general-purpose configuration object like the main Cedar
   Backup configuration object.  Instead, it just knows how to parse and emit
   MySQL-specific configuration values.  Third parties who need to read and
   write configuration related to this extension should access it through the
   constructor, C{validate} and C{addConfig} methods.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, mysql, validate, addConfig
   """
   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath} then configuration will be empty and will be invalid until it
      is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not) this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._mysql = None
      self.mysql = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         xmlData = open(xmlPath).read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.mysql)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.mysql != other.mysql:
         if self.mysql < other.mysql:
            return -1
         else:
            return 1
      return 0

   def _setMysql(self, value):
      """
      Property target used to set the mysql configuration value.
      If not C{None}, the value must be a C{MysqlConfig} object.
      @raise ValueError: If the value is not a C{MysqlConfig}
      """
      if value is None:
         self._mysql = None
      else:
         if not isinstance(value, MysqlConfig):
            raise ValueError("Value must be a C{MysqlConfig} object.")
         self._mysql = value

   def _getMysql(self):
      """
      Property target used to get the mysql configuration value.
      """
      return self._mysql

   mysql = property(_getMysql, _setMysql, None, "Mysql configuration in terms of a C{MysqlConfig} object.")
   def validate(self):
      """
      Validates configuration represented by the object.

      The compress mode must be filled in.  Then, if the 'all' flag I{is} set,
      no databases are allowed, and if the 'all' flag is I{not} set, at least
      one database is required.

      @raise ValueError: If one of the validations fails.
      """
      if self.mysql is None:
         raise ValueError("Mysql section is required.")
      if self.mysql.compressMode is None:
         raise ValueError("Compress mode value is required.")
      if self.mysql.all:
         if self.mysql.databases is not None and self.mysql.databases != []:
            raise ValueError("Databases cannot be specified if 'all' flag is set.")
      else:
         if self.mysql.databases is None or len(self.mysql.databases) < 1:
            raise ValueError("At least one MySQL database must be indicated if 'all' flag is not set.")

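The validation rules can be restated as a small standalone function; this sketch takes the three fields directly rather than a MysqlConfig object, and is illustrative only:

```python
def validate_mysql_section(compressMode, allFlag, databases):
    # Standalone restatement of LocalConfig.validate()'s rules for the
    # <mysql> section: compress mode required; 'all' and an explicit
    # database list are mutually exclusive, and one of the two is required.
    if compressMode is None:
        raise ValueError("Compress mode value is required.")
    if allFlag:
        if databases:
            raise ValueError("Databases cannot be specified if 'all' flag is set.")
    else:
        if not databases:
            raise ValueError("At least one MySQL database must be indicated if 'all' flag is not set.")
    return True
```

So a valid section is either "back up everything" (all=Y, no databases) or "back up exactly these" (all=N, one or more databases), never a mix.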
   def addConfig(self, xmlDom, parentNode):
      """
      Adds a <mysql> configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.

      We add the following fields to the document::

         user           //cb_config/mysql/user
         password       //cb_config/mysql/password
         compressMode   //cb_config/mysql/compress_mode
         all            //cb_config/mysql/all

      We also add groups of the following items, one list element per
      item::

         database       //cb_config/mysql/database

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      """
      if self.mysql is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "mysql")
         addStringNode(xmlDom, sectionNode, "user", self.mysql.user)
         addStringNode(xmlDom, sectionNode, "password", self.mysql.password)
         addStringNode(xmlDom, sectionNode, "compress_mode", self.mysql.compressMode)
         addBooleanNode(xmlDom, sectionNode, "all", self.mysql.all)
         if self.mysql.databases is not None:
            for database in self.mysql.databases:
               addStringNode(xmlDom, sectionNode, "database", database)

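The section addConfig() emits can be reproduced with the standard-library DOM alone. This sketch uses xml.dom.minidom in place of Cedar Backup's addContainerNode/addStringNode/addBooleanNode helpers, and the field values are made up for illustration:

```python
from xml.dom.minidom import getDOMImplementation

def build_mysql_section(user, password, compressMode, allFlag, databases):
    # Build <cb_config><mysql>...</mysql></cb_config> with minidom (sketch).
    doc = getDOMImplementation().createDocument(None, "cb_config", None)
    section = doc.createElement("mysql")
    doc.documentElement.appendChild(section)
    fields = [("user", user), ("password", password),
              ("compress_mode", compressMode), ("all", "Y" if allFlag else "N")]
    fields.extend(("database", name) for name in databases or [])
    for tag, text in fields:
        node = doc.createElement(tag)
        node.appendChild(doc.createTextNode(text))
        section.appendChild(node)
    return doc.documentElement.toxml()

print(build_mysql_section("backup", "secret", "bzip2", False, ["bugs", "wiki"]))
```

The output matches the field paths listed in the docstring above: one element per scalar field, plus one <database> element per list entry.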
   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls a static method to parse the mysql configuration section.

      @param xmlData: XML data to be parsed
      @type xmlData: String data

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._mysql = LocalConfig._parseMysql(parentNode)

   @staticmethod
   def _parseMysql(parentNode):
      """
      Parses a mysql configuration section.

      We read the following fields::

         user           //cb_config/mysql/user
         password       //cb_config/mysql/password
         compressMode   //cb_config/mysql/compress_mode
         all            //cb_config/mysql/all

      We also read groups of the following item, one list element per
      item::

         databases      //cb_config/mysql/database

      @param parentNode: Parent node to search beneath.

      @return: C{MysqlConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      mysql = None
      section = readFirstChild(parentNode, "mysql")
      if section is not None:
         mysql = MysqlConfig()
         mysql.user = readString(section, "user")
         mysql.password = readString(section, "password")
         mysql.compressMode = readString(section, "compress_mode")
         mysql.all = readBoolean(section, "all")
         mysql.databases = readStringList(section, "database")
      return mysql
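The inverse direction can likewise be sketched with the standard library. Here minidom lookups stand in for Cedar Backup's readFirstChild/readString/readBoolean/readStringList helpers, a plain dict stands in for MysqlConfig, and "Y"/"y" are assumed to mean a true <all> flag:

```python
from xml.dom.minidom import parseString

def parse_mysql_section(xmlData):
    # Standard-library stand-in for _parseMysql(): find the <mysql> section,
    # read the scalar fields, and collect one list entry per <database>.
    doc = parseString(xmlData)
    sections = doc.documentElement.getElementsByTagName("mysql")
    if not sections:
        return None  # section does not exist
    section = sections[0]
    def text(tag):
        nodes = section.getElementsByTagName(tag)
        return nodes[0].firstChild.data if nodes else None
    databases = [n.firstChild.data for n in section.getElementsByTagName("database")]
    return {"user": text("user"), "password": text("password"),
            "compressMode": text("compress_mode"),
            "all": text("all") in ("Y", "y"),
            "databases": databases or None}

config = parse_mysql_section("<cb_config><mysql><user>backup</user>"
                             "<compress_mode>gzip</compress_mode><all>N</all>"
                             "<database>bugs</database></mysql></cb_config>")
print(config["databases"])  # → ['bugs']
```

As in the real method, a missing section yields None rather than an error; validation of the parsed values is a separate step.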

########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the MySQL backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If a backup could not be written for some reason.
   """
   logger.debug("Executing MySQL extended action.")
   if config.options is None or config.collect is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   if local.mysql.all:
      logger.info("Backing up all databases.")
      _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password,
                      config.options.backupUser, config.options.backupGroup, None)
   else:
      logger.debug("Backing up %d individual databases." % len(local.mysql.databases))
      for database in local.mysql.databases:
         logger.info("Backing up database [%s]." % database)
         _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password,
                         config.options.backupUser, config.options.backupGroup, database)
   logger.info("Executed the MySQL extended action successfully.")

def _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This internal method wraps the public method and adds some functionality,
   like figuring out a filename, etc.

   @param targetDir: Directory into which backups should be written.
   @param compressMode: Compress mode to be used for backed-up files.
   @param user: User to use for connecting to the database (if any).
   @param password: Password associated with user (if any).
   @param backupUser: User to own resulting file.
   @param backupGroup: Group to own resulting file.
   @param database: Name of database, or C{None} for all databases.

   @return: Name of the generated backup file.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the MySQL dump.
   """
   (outputFile, filename) = _getOutputFile(targetDir, database, compressMode)
   try:
      backupDatabase(user, password, outputFile, database)
   finally:
      outputFile.close()
   if not os.path.exists(filename):
      raise IOError("Dump file [%s] does not seem to exist after backup completed." % filename)
   changeOwnership(filename, backupUser, backupGroup)

def _getOutputFile(targetDir, database, compressMode):
   """
   Opens the output file used for saving the MySQL dump.

   The filename is either C{"mysqldump.txt"} or C{"mysqldump-<database>.txt"}.
   A C{".gz"} or C{".bz2"} extension is added when C{compressMode} is
   C{"gzip"} or C{"bzip2"}, respectively.

   @param targetDir: Target directory to write file in.
   @param database: Name of the database (if any)
   @param compressMode: Compress mode to be used for backed-up files.

   @return: Tuple of (Output file object, filename)
   """
   if database is None:
      filename = os.path.join(targetDir, "mysqldump.txt")
   else:
      filename = os.path.join(targetDir, "mysqldump-%s.txt" % database)
   if compressMode == "gzip":
      filename = "%s.gz" % filename
      outputFile = GzipFile(filename, "w")
   elif compressMode == "bzip2":
      filename = "%s.bz2" % filename
      outputFile = BZ2File(filename, "w")
   else:
      outputFile = open(filename, "w")
   logger.debug("MySQL dump file will be [%s]." % filename)
   return (outputFile, filename)

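The naming half of _getOutputFile() is easy to exercise on its own; this sketch returns only the filename, without opening a file object:

```python
import os

def dump_filename(targetDir, database, compressMode):
    # Filename rules from _getOutputFile(): mysqldump.txt or
    # mysqldump-<database>.txt, plus .gz/.bz2 per the compress mode.
    if database is None:
        filename = os.path.join(targetDir, "mysqldump.txt")
    else:
        filename = os.path.join(targetDir, "mysqldump-%s.txt" % database)
    if compressMode == "gzip":
        filename = "%s.gz" % filename
    elif compressMode == "bzip2":
        filename = "%s.bz2" % filename
    return filename
```

Any other compress mode value leaves the filename uncompressed; the real function relies on the MysqlConfig property validation to reject modes outside VALID_COMPRESS_MODES before this point.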

############################
# backupDatabase() function
############################

def backupDatabase(user, password, backupFile, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This function backs up either a named local MySQL database or all local
   MySQL databases, using the passed-in user and password (if provided) for
   connectivity.  This function call I{always} results in a full backup.  There
   is no facility for incremental backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller is
   responsible for closing the passed-in backup file.

   Often, the "root" database user will be used when backing up all databases.
   An alternative is to create a separate MySQL "backup" user and grant that
   user rights to read (but not write) all of the databases that will be backed
   up.

   This function accepts a username and password.  However, you probably do not
   want to pass those values in.  This is because they will be provided to
   C{mysqldump} via the command-line C{--user} and C{--password} switches,
   which will be visible to other users in the process listing.

   Instead, you should configure the username and password in one of MySQL's
   configuration files.  Typically, this would be done by putting a stanza like
   this in C{/root/.my.cnf}, to provide C{mysqldump} with the root database
   username and its password::

      [mysqldump]
      user     = root
      password = <secret>

   If you are executing this function as some system user other than root, then
   the C{.my.cnf} file would be placed in the home directory of that user.  In
   either case, make sure to set restrictive permissions (typically, mode
   C{0600}) on C{.my.cnf} to make sure that other users cannot read the file.

   @param user: User to use for connecting to the database (if any)
   @type user: String representing MySQL username, or C{None}

   @param password: Password associated with user (if any)
   @type password: String representing MySQL password, or C{None}

   @param backupFile: File used for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param database: Name of the database to be backed up.
   @type database: String representing database name, or C{None} for all databases.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the MySQL dump.
   """
   args = [ "-all", "--flush-logs", "--opt", ]
   if user is not None:
      logger.warn("Warning: MySQL username will be visible in process listing (consider using ~/.my.cnf).")
      args.append("--user=%s" % user)
   if password is not None:
      logger.warn("Warning: MySQL password will be visible in process listing (consider using ~/.my.cnf).")
      args.append("--password=%s" % password)
   if database is None:
      args.insert(0, "--all-databases")
   else:
      args.insert(0, "--databases")
      args.append(database)
   command = resolveCommand(MYSQLDUMP_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      if database is None:
         raise IOError("Error [%d] executing MySQL database dump for all databases." % result)
      else:
         raise IOError("Error [%d] executing MySQL database dump for database [%s]." % (result, database))
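The argument list handed to mysqldump can be previewed in isolation; this sketch mirrors the list-building logic above without the logging, command resolution, or execution:

```python
def mysqldump_args(user, password, database):
    # Mirror of backupDatabase()'s argument construction (no execution).
    args = ["-all", "--flush-logs", "--opt"]
    if user is not None:
        args.append("--user=%s" % user)          # visible in the process listing
    if password is not None:
        args.append("--password=%s" % password)  # visible too; prefer ~/.my.cnf
    if database is None:
        args.insert(0, "--all-databases")
    else:
        args.insert(0, "--databases")
        args.append(database)
    return args

print(mysqldump_args(None, None, None))
```

Note how the mode switch is always inserted at the front and a named database is appended at the end, so --databases and its operand bracket the fixed options.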

CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.cdwriter._ImageProperties-class.html

CedarBackup2.writers.cdwriter._ImageProperties

    Class _ImageProperties


    object --+
             |
            _ImageProperties
    

    Simple value object to hold image properties for DvdWriter.

Instance Methods
     
    __init__(self)
    x.__init__(...) initializes x; see help(type(x)) for signature

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)


    x.__init__(...) initializes x; see help(type(x)) for signature

    Overrides: object.__init__
    (inherited documentation)

CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.actions.util-module.html

util

    Module util


    Functions

    buildMediaLabel
    checkMediaState
    createWriter
    findDailyDirs
    getBackupFiles
    initializeMediaState
    writeIndicatorFile

    Variables

    MEDIA_LABEL_PREFIX
    __package__
    logger

CedarBackup2-2.22.0/doc/interface/CedarBackup2.cli.Options-class.html

CedarBackup2.cli.Options

    Class Options


    object --+
             |
            Options
    
    Known Subclasses:

    Class representing command-line options for the cback script.

    The Options class is a Python object representation of the command-line options of the cback script.

The object representation is two-way: a command line string or a list of command line arguments can be used to create an Options object, and then changes to the object can be propagated back to a list of command-line arguments or to a command-line string. An Options object can even be created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the Options class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to fields if you are programmatically filling an object.

    The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

All of these post-completion validations are encapsulated in the Options.validate method. This method can be called at any time by a client, and will always be called immediately after creating an Options object from a command line and before exporting an Options object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines.
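The round trip described here (command line in, normalized long-option command line out) can be sketched with the standard library alone. The option subset and field names below are illustrative, not the real Options class:

```python
import getopt

def parse_and_rebuild(argv):
    # Parse a cback-style argument list (like sys.argv[1:]) and rebuild it
    # normalized to long option names, as buildArgumentList() does (sketch).
    opts, actions = getopt.getopt(argv, "bc:", ["verbose", "config="])
    flags = {"verbose": False, "config": None}
    for name, value in opts:
        if name in ("-b", "--verbose"):
            flags["verbose"] = True
        elif name in ("-c", "--config"):
            flags["config"] = value
    rebuilt = []
    if flags["verbose"]:
        rebuilt.append("--verbose")
    if flags["config"] is not None:
        rebuilt.extend(["--config", flags["config"]])
    return rebuilt + actions
```

As with the real class, the original option order is not preserved; the rebuilt list is canonical, which is what makes the two-way conversion safe to repeat.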


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, argumentList=None, argumentString=None, validate=True)
    Initializes an options object.
     
    __repr__(self)
    Official string representation for class instance.
     
    __str__(self)
    Informal string representation for class instance.
     
    __cmp__(self, other)
    Definition of equals operator for this class.
     
    _setHelp(self, value)
    Property target used to set the help flag.
     
    _getHelp(self)
    Property target used to get the help flag.
     
    _setVersion(self, value)
    Property target used to set the version flag.
     
    _getVersion(self)
    Property target used to get the version flag.
     
    _setVerbose(self, value)
    Property target used to set the verbose flag.
     
    _getVerbose(self)
    Property target used to get the verbose flag.
     
    _setQuiet(self, value)
    Property target used to set the quiet flag.
     
    _getQuiet(self)
    Property target used to get the quiet flag.
     
    _setConfig(self, value)
    Property target used to set the config parameter.
     
    _getConfig(self)
    Property target used to get the config parameter.
     
    _setFull(self, value)
    Property target used to set the full flag.
     
    _getFull(self)
    Property target used to get the full flag.
     
    _setManaged(self, value)
    Property target used to set the managed flag.
     
    _getManaged(self)
    Property target used to get the managed flag.
     
    _setManagedOnly(self, value)
    Property target used to set the managedOnly flag.
     
    _getManagedOnly(self)
    Property target used to get the managedOnly flag.
     
    _setLogfile(self, value)
    Property target used to set the logfile parameter.
     
    _getLogfile(self)
    Property target used to get the logfile parameter.
     
    _setOwner(self, value)
    Property target used to set the owner parameter.
     
    _getOwner(self)
    Property target used to get the owner parameter.
     
    _setMode(self, value)
    Property target used to set the mode parameter.
     
    _getMode(self)
    Property target used to get the mode parameter.
     
    _setOutput(self, value)
    Property target used to set the output flag.
     
    _getOutput(self)
    Property target used to get the output flag.
     
    _setDebug(self, value)
    Property target used to set the debug flag.
     
    _getDebug(self)
    Property target used to get the debug flag.
     
    _setStacktrace(self, value)
    Property target used to set the stacktrace flag.
     
    _getStacktrace(self)
    Property target used to get the stacktrace flag.
     
    _setDiagnostics(self, value)
    Property target used to set the diagnostics flag.
     
    _getDiagnostics(self)
    Property target used to get the diagnostics flag.
     
    _setActions(self, value)
    Property target used to set the actions list.
     
    _getActions(self)
    Property target used to get the actions list.
     
    validate(self)
    Validates command-line options represented by the object.
     
    buildArgumentList(self, validate=True)
    Extracts options into a list of command line arguments.
     
    buildArgumentString(self, validate=True)
    Extracts options into a string of command-line arguments.
     
    _parseArgumentList(self, argumentList)
    Internal method to parse a list of command-line arguments.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      help
    Command-line help (-h,--help) flag.
      version
    Command-line version (-V,--version) flag.
      verbose
    Command-line verbose (-b,--verbose) flag.
      quiet
    Command-line quiet (-q,--quiet) flag.
      config
    Command-line configuration file (-c,--config) parameter.
      full
    Command-line full-backup (-f,--full) flag.
      managed
    Command-line managed (-M,--managed) flag.
      managedOnly
    Command-line managed-only (-N,--managed-only) flag.
      logfile
    Command-line logfile (-l,--logfile) parameter.
      owner
    Command-line owner (-o,--owner) parameter, as tuple (user,group).
      mode
    Command-line mode (-m,--mode) parameter.
      output
    Command-line output (-O,--output) flag.
      debug
    Command-line debug (-d,--debug) flag.
      stacktrace
    Command-line stacktrace (-s,--stack) flag.
      diagnostics
    Command-line diagnostics (-D,--diagnostics) flag.
      actions
    Command-line actions list.

    Inherited from object: __class__

Method Details

    __init__(self, argumentList=None, argumentString=None, validate=True)
    (Constructor)


    Initializes an options object.

    If you initialize the object without passing either argumentList or argumentString, the object will be empty and will be invalid until it is filled in properly.

    No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    The argument list is assumed to be a list of arguments, not including the name of the command, something like sys.argv[1:]. If you pass sys.argv instead, things are not going to work.

    The argument string will be parsed into an argument list by the util.splitCommandLine function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to sys.argv[1:], just like argumentList.

    Unless the validate argument is False, the Options.validate method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if validate is False, it might not be possible to parse the passed-in command line, so an exception might still be raised.

    Parameters:
    • argumentList (List of arguments, i.e. sys.argv) - Command line for a program.
    • argumentString (String, i.e. "cback --verbose stage store") - Command line for a program.
    • validate (Boolean true/false.) - Validate the command line after parsing it.
    Raises:
    • getopt.GetoptError - If the command-line arguments could not be parsed.
    • ValueError - If the command-line arguments are invalid.
    Overrides: object.__init__
    Notes:
    • The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback script.
    • It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid command line arguments.

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setHelp(self, value)


    Property target used to set the help flag. No validations, but we normalize the value to True or False.

    _setVersion(self, value)


    Property target used to set the version flag. No validations, but we normalize the value to True or False.

    _setVerbose(self, value)


    Property target used to set the verbose flag. No validations, but we normalize the value to True or False.

    _setQuiet(self, value)


    Property target used to set the quiet flag. No validations, but we normalize the value to True or False.

    _setFull(self, value)


    Property target used to set the full flag. No validations, but we normalize the value to True or False.

    _setManaged(self, value)


    Property target used to set the managed flag. No validations, but we normalize the value to True or False.

    _setManagedOnly(self, value)


    Property target used to set the managedOnly flag. No validations, but we normalize the value to True or False.

    _setLogfile(self, value)


    Property target used to set the logfile parameter.

    Raises:
    • ValueError - If the value cannot be encoded properly.

    _setOwner(self, value)

    source code 

    Property target used to set the owner parameter. If not None, the owner must be a (user,group) tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple.

    Raises:
    • ValueError - If the value is not valid.

    _getOwner(self)

    source code 

    Property target used to get the owner parameter. The parameter is a tuple of (user, group).

    _setOutput(self, value)

    source code 

    Property target used to set the output flag. No validations, but we normalize the value to True or False.

    _setDebug(self, value)

    source code 

    Property target used to set the debug flag. No validations, but we normalize the value to True or False.

    _setStacktrace(self, value)

    source code 

    Property target used to set the stacktrace flag. No validations, but we normalize the value to True or False.

    _setDiagnostics(self, value)

    source code 

    Property target used to set the diagnostics flag. No validations, but we normalize the value to True or False.

    _setActions(self, value)

    source code 

    Property target used to set the actions list. We don't restrict the contents of actions. They're validated somewhere else.

    Raises:
    • ValueError - If the value is not valid.

    validate(self)

    source code 

    Validates command-line options represented by the object.

    Unless --help or --version are supplied, at least one action must be specified. Other validations (such as allowed values for particular options) are taken care of at assignment time by the properties functionality.

    Raises:
    • ValueError - If one of the validations fails.

    Note: The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback script.

    buildArgumentList(self, validate=True)

    source code 

    Extracts options into a list of command line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the argumentList parameter. Unlike buildArgumentString, string arguments are not quoted here, because there is no need for it.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    List representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    buildArgumentString(self, validate=True)

    source code 

    Extracts options into a string of command-line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes ("). The resulting string will be suitable for passing back to the constructor in the argumentString parameter.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    String representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.
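    The normalization these two methods describe (long option names, parameters followed by their values, actions last) can be sketched as follows. This is an illustration only, not the CedarBackup2 implementation; the names FLAG_MAP and build_argument_list are invented for the example:

```python
# Illustrative sketch of normalizing option flags into a long-option
# argument list, as buildArgumentList is described above.  Not the real
# Options class; FLAG_MAP and build_argument_list are hypothetical names.

FLAG_MAP = {"help": "--help", "version": "--version", "verbose": "--verbose"}

def build_argument_list(flags, logfile=None, actions=()):
    """Return a normalized argument list using long option names."""
    # Original ordering is not preserved; flags are emitted in sorted order.
    args = [FLAG_MAP[name] for name in sorted(flags) if name in FLAG_MAP]
    if logfile is not None:
        args.extend(["--logfile", logfile])  # parameters keep their values
    args.extend(actions)                     # actions come last, unquoted
    return args
```

A round-trip would then pass this list back as the argumentList constructor parameter.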

    _parseArgumentList(self, argumentList)

    source code 

    Internal method to parse a list of command-line arguments.

    Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the validate method).

    For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. -l and --logfile) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used.

    Parameters:
    • argumentList (List of arguments to a command, i.e. sys.argv[1:]) - List of arguments to a command.
    Raises:
    • ValueError - If the argument list cannot be successfully parsed.
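    The precedence rule above (long switch beats short, last occurrence of the same switch wins) can be sketched for a single parameter like the logfile. This is an illustration, not the real parsing code; resolve_logfile is a hypothetical name:

```python
def resolve_logfile(argument_list):
    """Sketch of the duplicate-switch precedence described above: the last
    occurrence of a given switch wins, and a long switch wins over a short
    one.  Hypothetical helper, not part of the CedarBackup2 API."""
    logfile_short = logfile_long = None
    it = iter(argument_list)
    for arg in it:
        if arg == "-l":
            logfile_short = next(it)   # last -l value seen so far
        elif arg == "--logfile":
            logfile_long = next(it)    # last --logfile value seen so far
    return logfile_long if logfile_long is not None else logfile_short
```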

    Property Details [hide private]

    help

    Command-line help (-h,--help) flag.

    Get Method:
    _getHelp(self) - Property target used to get the help flag.
    Set Method:
    _setHelp(self, value) - Property target used to set the help flag.

    version

    Command-line version (-V,--version) flag.

    Get Method:
    _getVersion(self) - Property target used to get the version flag.
    Set Method:
    _setVersion(self, value) - Property target used to set the version flag.

    verbose

    Command-line verbose (-b,--verbose) flag.

    Get Method:
    _getVerbose(self) - Property target used to get the verbose flag.
    Set Method:
    _setVerbose(self, value) - Property target used to set the verbose flag.

    quiet

    Command-line quiet (-q,--quiet) flag.

    Get Method:
    _getQuiet(self) - Property target used to get the quiet flag.
    Set Method:
    _setQuiet(self, value) - Property target used to set the quiet flag.

    config

    Command-line configuration file (-c,--config) parameter.

    Get Method:
    _getConfig(self) - Property target used to get the config parameter.
    Set Method:
    _setConfig(self, value) - Property target used to set the config parameter.

    full

    Command-line full-backup (-f,--full) flag.

    Get Method:
    _getFull(self) - Property target used to get the full flag.
    Set Method:
    _setFull(self, value) - Property target used to set the full flag.

    managed

    Command-line managed (-M,--managed) flag.

    Get Method:
    _getManaged(self) - Property target used to get the managed flag.
    Set Method:
    _setManaged(self, value) - Property target used to set the managed flag.

    managedOnly

    Command-line managed-only (-N,--managed-only) flag.

    Get Method:
    _getManagedOnly(self) - Property target used to get the managedOnly flag.
    Set Method:
    _setManagedOnly(self, value) - Property target used to set the managedOnly flag.

    logfile

    Command-line logfile (-l,--logfile) parameter.

    Get Method:
    _getLogfile(self) - Property target used to get the logfile parameter.
    Set Method:
    _setLogfile(self, value) - Property target used to set the logfile parameter.

    owner

    Command-line owner (-o,--owner) parameter, as tuple (user,group).

    Get Method:
    _getOwner(self) - Property target used to get the owner parameter.
    Set Method:
    _setOwner(self, value) - Property target used to set the owner parameter.

    mode

    Command-line mode (-m,--mode) parameter.

    Get Method:
    _getMode(self) - Property target used to get the mode parameter.
    Set Method:
    _setMode(self, value) - Property target used to set the mode parameter.

    output

    Command-line output (-O,--output) flag.

    Get Method:
    _getOutput(self) - Property target used to get the output flag.
    Set Method:
    _setOutput(self, value) - Property target used to set the output flag.

    debug

    Command-line debug (-d,--debug) flag.

    Get Method:
    _getDebug(self) - Property target used to get the debug flag.
    Set Method:
    _setDebug(self, value) - Property target used to set the debug flag.

    stacktrace

    Command-line stacktrace (-s,--stack) flag.

    Get Method:
    _getStacktrace(self) - Property target used to get the stacktrace flag.
    Set Method:
    _setStacktrace(self, value) - Property target used to set the stacktrace flag.

    diagnostics

    Command-line diagnostics (-D,--diagnostics) flag.

    Get Method:
    _getDiagnostics(self) - Property target used to get the diagnostics flag.
    Set Method:
    _setDiagnostics(self, value) - Property target used to set the diagnostics flag.

    actions

    Command-line actions list.

    Get Method:
    _getActions(self) - Property target used to get the actions list.
    Set Method:
    _setActions(self, value) - Property target used to set the actions list.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.RestrictedContentList-class.html: CedarBackup2.util.RestrictedContentList
    Package CedarBackup2 :: Module util :: Class RestrictedContentList

    Class RestrictedContentList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RestrictedContentList
    

    Class representing a list containing only objects with certain values.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is among the valid values. We use a standard comparison, so pretty much anything can be in the list of valid values.

    The valuesDescr value will be used in exceptions, i.e. "Item must be one of values in VALID_ACTIONS" if valuesDescr is "VALID_ACTIONS".


    Note: This class doesn't make any attempt to trap for nonsensical arguments. All of the values in the values list should be of the same type (i.e. strings). Then, all list operations also need to be of that type (i.e. you should always insert or append just strings). If you mix types -- for instance lists and strings -- you will likely see AttributeError exceptions or other problems.
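    The behavior described above can be sketched with a simplified stand-in. This is not the CedarBackup2 implementation (which also overrides insert and extend, and inherits from UnorderedList); it only shows the append-time restriction and the valuesDescr error message:

```python
class RestrictedList(list):
    """Simplified sketch of a list restricted to certain values.
    Hypothetical class, not the real RestrictedContentList."""

    def __init__(self, values_list, values_descr, prefix=None):
        super().__init__()
        self.values_list = values_list
        self.values_descr = values_descr
        self.prefix = prefix if prefix is not None else "Item"

    def append(self, item):
        # Standard comparison against the valid values, as described above.
        if item not in self.values_list:
            raise ValueError("%s must be one of values in %s"
                             % (self.prefix, self.values_descr))
        super().append(item)
```

For instance, a list built with `RestrictedList(["collect", "stage"], "VALID_ACTIONS")` accepts "collect" but raises ValueError on anything else.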

    Instance Methods [hide private]
    new empty list
    __init__(self, valuesList, valuesDescr, prefix=None)
    Initializes a list restricted to containing certain values.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables [hide private]

    Inherited from list: __hash__

    Properties [hide private]

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, valuesList, valuesDescr, prefix=None)
    (Constructor)

    source code 

    Initializes a list restricted to containing certain values.

    Parameters:
    • valuesList - List of valid values.
    • valuesDescr - Short string describing list of values.
    • prefix - Prefix to use in error messages (None results in prefix "Item")
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.extend

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.cdwriter.MediaCapacity-class.html: CedarBackup2.writers.cdwriter.MediaCapacity
    Package CedarBackup2 :: Package writers :: Module cdwriter :: Class MediaCapacity

    Class MediaCapacity

    source code

    object --+
             |
            MediaCapacity
    

    Class encapsulating information about CD media capacity.

    Space used includes the required media lead-in (unless the disc is unused). Space available attempts to provide a picture of how many bytes are available for data storage, including any required lead-in.

    The boundaries value is either None (if multisession discs are not supported or if the disc has no boundaries) or in exactly the form provided by cdrecord -msinfo. It can be passed as-is to the IsoImage class.
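    The derived totalCapacity and utilized properties amount to simple arithmetic over the two byte counts. A sketch of the likely calculations (the function names are illustrative, and the percentage formula is an assumption about the real implementation):

```python
def total_capacity(bytes_used, bytes_available):
    """Total capacity of the disc: used plus available, in bytes."""
    return bytes_used + bytes_available

def utilized(bytes_used, bytes_available):
    """Percentage of total capacity which is utilized (0.0 for blank media)."""
    total = bytes_used + bytes_available
    if total == 0:
        return 0.0
    return (float(bytes_used) / total) * 100.0
```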

    Instance Methods [hide private]
     
    __init__(self, bytesUsed, bytesAvailable, boundaries)
    Initializes a capacity object.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    _getBytesUsed(self)
    Property target to get the bytes-used value.
    source code
     
    _getBytesAvailable(self)
    Property target to get the bytes-available value.
    source code
     
    _getBoundaries(self)
    Property target to get the boundaries tuple.
    source code
     
    _getTotalCapacity(self)
    Property target to get the total capacity (used + available).
    source code
     
    _getUtilized(self)
    Property target to get the percent of capacity which is utilized.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      bytesUsed
    Space used on disc, in bytes.
      bytesAvailable
    Space available on disc, in bytes.
      boundaries
    Session disc boundaries, in terms of ISO sectors.
      totalCapacity
    Total capacity of the disc, in bytes.
      utilized
    Percentage of the total capacity which is utilized.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, bytesUsed, bytesAvailable, boundaries)
    (Constructor)

    source code 

    Initializes a capacity object.

    Raises:
    • IndexError - If the boundaries tuple does not have enough elements.
    • ValueError - If the boundaries values are not integers.
    • ValueError - If the bytes used and available values are not floats.
    Overrides: object.__init__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    Property Details [hide private]

    bytesUsed

    Space used on disc, in bytes.

    Get Method:
    _getBytesUsed(self) - Property target to get the bytes-used value.

    bytesAvailable

    Space available on disc, in bytes.

    Get Method:
    _getBytesAvailable(self) - Property target to get the bytes-available value.

    boundaries

    Session disc boundaries, in terms of ISO sectors.

    Get Method:
    _getBoundaries(self) - Property target to get the boundaries tuple.

    totalCapacity

    Total capacity of the disc, in bytes.

    Get Method:
    _getTotalCapacity(self) - Property target to get the total capacity (used + available).

    utilized

    Percentage of the total capacity which is utilized.

    Get Method:
    _getUtilized(self) - Property target to get the percent of capacity which is utilized.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config-module.html: CedarBackup2.config
    Package CedarBackup2 :: Module config

    Module config

    source code

    Provides configuration-related objects.

    Summary

    Cedar Backup stores all of its configuration in an XML document typically called cback.conf. The standard location for this document is in /etc, but users can specify a different location if they want to.

    The Config class is a Python object representation of a Cedar Backup XML configuration file. The representation is two-way: XML data can be used to create a Config object, and then changes to the object can be propagated back to disk. A Config object can even be used to create a configuration file from scratch programmatically.

    The Config class is intended to be the only Python-language interface to Cedar Backup configuration on disk. Cedar Backup will use the class as its internal representation of configuration, and applications external to Cedar Backup itself (such as a hypothetical third-party configuration tool written in Python or a third party extension module) should also use the class when they need to read and write configuration files.

    Backwards Compatibility

    The configuration file format has changed between Cedar Backup 1.x and Cedar Backup 2.x. Any Cedar Backup 1.x configuration file is also a valid Cedar Backup 2.x configuration file. However, the reverse is not true: 2.x configuration files contain additional configuration that is not accepted by older versions of the software.

    XML Configuration Structure

    A Config object can either be created "empty", or can be created based on XML input (either in the form of a string or read in from a file on disk). Generally speaking, the XML input must result in a Config object which passes the validations laid out below in the Validation section.

    An XML configuration file is composed of eight sections:

    • reference: specifies reference information about the file (author, revision, etc)
    • extensions: specifies mappings to Cedar Backup extensions (external code)
    • options: specifies global configuration options
    • peers: specifies the set of peers in a master's backup pool
    • collect: specifies configuration related to the collect action
    • stage: specifies configuration related to the stage action
    • store: specifies configuration related to the store action
    • purge: specifies configuration related to the purge action

    Each section is represented by a class in this module, and the overall Config class is a composition of the various other classes.

    Any configuration section that is missing in the XML document (or has not been filled into an "empty" document) will just be set to None in the object representation. The same goes for individual fields within each configuration section. Keep in mind that the document might not be completely valid if some sections or fields aren't filled in - but that won't matter until validation takes place (see the Validation section below).

    Unicode vs. String Data

    By default, all string data that comes out of XML documents in Python is unicode data (i.e. u"whatever"). This is fine for many things, but when it comes to filesystem paths, it can cause problems. We really want strings to be encoded in the filesystem encoding rather than being unicode. So, most elements in configuration which represent filesystem paths are converted to plain strings using util.encodePath. The main exception is the various absoluteExcludePath and relativeExcludePath lists. These are not converted, because they are generally only used for filtering, not for filesystem operations.
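    A minimal sketch of the kind of conversion util.encodePath performs, assuming the target is the filesystem encoding. The real function was written for Python 2 and has more error handling; encode_path here is an illustrative name:

```python
import sys

def encode_path(path):
    """Encode a unicode path into the filesystem encoding (sketch only;
    the real util.encodePath has additional error handling)."""
    if isinstance(path, bytes):
        return path  # already encoded, pass through unchanged
    encoding = sys.getfilesystemencoding() or "utf-8"
    return path.encode(encoding)
```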

    Validation

    There are two main levels of validation in the Config class and its children. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to configuration class fields.

    The second level of validation is post-completion validation. Certain validations don't make sense until a document is fully "complete". We don't want these validations to apply all of the time, because it would make building up a document from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the Config.validate method. This method can be called at any time by a client, and will always be called immediately after creating a Config object from XML data and before exporting a Config object to XML. This way, we get decent ease-of-use but we also don't accept or emit invalid configuration files.

    The Config.validate implementation actually takes two passes to completely validate a configuration document. The first pass at validation is to ensure that the proper sections are filled into the document. There are default requirements, but the caller has the opportunity to override these defaults.

    The second pass at validation ensures that any filled-in section contains valid data. Any section which is not set to None is validated according to the rules for that section (see below).

    Reference Validations

    No validations.

    Extensions Validations

    The list of actions may be either None or an empty list [] if desired. Each extended action must include a name, a module and a function. Then, an extended action must include either an index or dependency information. Which one is required depends on which order mode is configured.

    Options Validations

    All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose.

    Peers Validations

    Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section.

    Collect Validations

    The target directory must be filled in. The collect mode, archive mode and ignore file are all optional. The list of absolute paths to exclude and patterns to exclude may be either None or an empty list [] if desired.

    Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent CollectConfig object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either None or an empty list [] if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the CollectConfig object to make the complete list for a given directory.

    Stage Validations

    The target directory must be filled in. There must be at least one peer (remote or local) between the two lists of peers. A list with no entries can be either None or an empty list [] if desired.

    If a set of peers is provided, this configuration completely overrides configuration in the peers configuration section, and the same validations apply.

    Store Validations

    The device type and drive speed are optional, and all other values are required (missing booleans will be set to defaults, which is OK).

    The image writer functionality in the writer module is supposed to be able to handle a device speed of None. Any caller which needs a "real" (non-None) value for the device type can use DEFAULT_DEVICE_TYPE, which is guaranteed to be sensible.

    Purge Validations

    The list of purge directories may be either None or an empty list [] if desired. All purge directories must contain a path and a retain days value.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      ActionDependencies
    Class representing dependencies associated with an extended action.
      ActionHook
    Class representing a hook associated with an action.
      PreActionHook
    Class representing a pre-action hook associated with an action.
      PostActionHook
    Class representing a post-action hook associated with an action.
      ExtendedAction
    Class representing an extended action.
      CommandOverride
    Class representing a piece of Cedar Backup command override configuration.
      CollectFile
    Class representing a Cedar Backup collect file.
      CollectDir
    Class representing a Cedar Backup collect directory.
      PurgeDir
    Class representing a Cedar Backup purge directory.
      LocalPeer
    Class representing a Cedar Backup peer.
      RemotePeer
    Class representing a Cedar Backup peer.
      ReferenceConfig
    Class representing a Cedar Backup reference configuration.
      ExtensionsConfig
    Class representing Cedar Backup extensions configuration.
      OptionsConfig
    Class representing a Cedar Backup global options configuration.
      PeersConfig
    Class representing Cedar Backup global peer configuration.
      CollectConfig
    Class representing a Cedar Backup collect configuration.
      StageConfig
    Class representing a Cedar Backup stage configuration.
      StoreConfig
    Class representing a Cedar Backup store configuration.
      PurgeConfig
    Class representing a Cedar Backup purge configuration.
      Config
    Class representing a Cedar Backup XML configuration document.
      ByteQuantity
    Class representing a byte quantity.
      BlankBehavior
    Class representing optimized store-action media blanking behavior.
    Functions [hide private]
     
    readByteQuantity(parent, name)
    Read a byte size value from an XML document.
    source code
     
    addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity)
    Adds a text node as the next child of a parent, to contain a byte size.
    source code
    Variables [hide private]
      DEFAULT_DEVICE_TYPE = 'cdwriter'
    The default device type.
      DEFAULT_MEDIA_TYPE = 'cdrw-74'
    The default media type.
      VALID_DEVICE_TYPES = ['cdwriter', 'dvdwriter']
    List of valid device types.
      VALID_MEDIA_TYPES = ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80',...
    List of valid media types.
      VALID_COLLECT_MODES = ['daily', 'weekly', 'incr']
    List of valid collect modes.
      VALID_ARCHIVE_MODES = ['tar', 'targz', 'tarbz2']
    List of valid archive modes.
      VALID_ORDER_MODES = ['index', 'dependency']
    List of valid extension order modes.
      logger = logging.getLogger("CedarBackup2.log.config")
      VALID_CD_MEDIA_TYPES = ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80']
      VALID_DVD_MEDIA_TYPES = ['dvd+r', 'dvd+rw']
      VALID_COMPRESS_MODES = ['none', 'gzip', 'bzip2']
    List of valid compress modes.
      VALID_BLANK_MODES = ['daily', 'weekly']
      VALID_BYTE_UNITS = [0, 1, 2, 4]
      VALID_FAILURE_MODES = ['none', 'all', 'daily', 'weekly']
      REWRITABLE_MEDIA_TYPES = ['cdrw-74', 'cdrw-80', 'dvd+rw']
      ACTION_NAME_REGEX = '^[a-z0-9]*$'
      __package__ = 'CedarBackup2'
    Function Details [hide private]

    readByteQuantity(parent, name)

    source code 

    Read a byte size value from an XML document.

    A byte size value is an interpreted string value. If the string value ends with "MB" or "GB", then the string before that is interpreted as megabytes or gigabytes. Otherwise, it is interpreted as bytes.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    ByteQuantity parsed from XML document
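    The interpretation rule reads naturally as a small parser. This sketch assumes binary (1024-based) multipliers, which is an assumption about the real implementation; parse_byte_quantity is an illustrative name and the real function returns a ByteQuantity object rather than a float:

```python
def parse_byte_quantity(text):
    """Interpret a byte size string: an "MB" or "GB" suffix scales the
    number; anything else is taken as plain bytes.  Sketch only."""
    text = text.strip()
    if text.upper().endswith("GB"):
        return float(text[:-2]) * 1024 * 1024 * 1024
    if text.upper().endswith("MB"):
        return float(text[:-2]) * 1024 * 1024
    return float(text)
```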

    addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity)

    source code 

    Adds a text node as the next child of a parent, to contain a byte size.

    If the byteQuantity is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    The size in bytes will be normalized: if it is larger than 1.0 GB, it will be shown in GB ("1.0 GB"); if it is larger than 1.0 MB, it will be shown in MB ("1.0 MB"); otherwise, it will be shown in bytes ("423413").

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • byteQuantity - ByteQuantity object to put into the XML document
    Returns:
    Reference to the newly-created node.
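    The normalization rule amounts to choosing a display unit by threshold. A sketch follows; display_bytes is an illustrative name, and the exact precision of the real output is an assumption:

```python
def display_bytes(size):
    """Normalize a byte count for display, as described above: GB if the
    value exceeds 1.0 GB, MB if it exceeds 1.0 MB, plain bytes otherwise.
    Sketch only; output precision is an assumption."""
    GB = 1024.0 * 1024.0 * 1024.0
    MB = 1024.0 * 1024.0
    if size > GB:
        return "%.2f GB" % (size / GB)
    if size > MB:
        return "%.2f MB" % (size / MB)
    return "%d" % size
```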

    Variables Details [hide private]

    VALID_MEDIA_TYPES

    List of valid media types.
    Value:
    ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80', 'dvd+r', 'dvd+rw']
    

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.cdwriter-module.html: CedarBackup2.writers.cdwriter
    Package CedarBackup2 :: Package writers :: Module cdwriter

    Module cdwriter

    source code

    Provides functionality related to CD writer devices.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      MediaDefinition
    Class encapsulating information about CD media definitions.
      MediaCapacity
    Class encapsulating information about CD media capacity.
      CdWriter
    Class representing a device that knows how to write CD media.
      _ImageProperties
    Simple value object to hold image properties for CdWriter.
    Variables [hide private]
      MEDIA_CDRW_74 = 1
    Constant representing 74-minute CD-RW media.
      MEDIA_CDR_74 = 2
    Constant representing 74-minute CD-R media.
      MEDIA_CDRW_80 = 3
    Constant representing 80-minute CD-RW media.
      MEDIA_CDR_80 = 4
    Constant representing 80-minute CD-R media.
      logger = logging.getLogger("CedarBackup2.log.writers.cdwriter")
      CDRECORD_COMMAND = ['cdrecord']
      EJECT_COMMAND = ['eject']
      MKISOFS_COMMAND = ['mkisofs']
      __package__ = 'CedarBackup2.writers'
    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.capacity-module.html: capacity

    Module capacity


    Classes

    CapacityConfig
    LocalConfig
    PercentageQuantity

    Functions

    executeAction

    Variables

    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.release-module.html: release

    Module release


    Variables

    AUTHOR
    COPYRIGHT
    DATE
    EMAIL
    URL
    VERSION
    __package__

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.knapsack-module.html: CedarBackup2.knapsack
    Package CedarBackup2 :: Module knapsack

    Module knapsack

    source code

    Provides the implementation for various knapsack algorithms.

    Knapsack algorithms are "fit" algorithms, used to take a set of "things" and decide on the optimal way to fit them into some container. The focus of this code is to fit files onto a disc, although the interface (in terms of item, item size and capacity size, with no units) is generic enough that it can be applied to items other than files.

    All of the algorithms implemented below assume that "optimal" means "use up as much of the disc's capacity as possible", but each produces slightly different results. For instance, the best fit and first fit algorithms tend to include fewer files than the worst fit and alternate fit algorithms, even if they use the disc space more efficiently.

    Usually, for a given set of circumstances, it will be obvious to a human which algorithm is the right one to use, based on trade-offs between number of files included and ideal space utilization. It's a little more difficult to do this programmatically. For Cedar Backup's purposes (i.e. trying to fit a small number of collect-directory tarfiles onto a disc), worst-fit is probably the best choice if the goal is to include as many of the collect directories as possible.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions

    firstFit(items, capacity)
    Implements the first-fit knapsack algorithm.

    bestFit(items, capacity)
    Implements the best-fit knapsack algorithm.

    worstFit(items, capacity)
    Implements the worst-fit knapsack algorithm.

    alternateFit(items, capacity)
    Implements the alternate-fit knapsack algorithm.

    Variables

      __package__ = None

    Function Details

    firstFit(items, capacity)


    Implements the first-fit knapsack algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is indexed by item key, and each value is a tuple that also includes that key. This seems a bit strange at first glance. It works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above

    bestFit(items, capacity)


    Implements the best-fit knapsack algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is indexed by item key, and each value is a tuple that also includes that key. This seems a bit strange at first glance. It works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above
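    Under the same assumptions (an illustrative sketch of the documented behavior, not the shipped code), best-fit differs from first-fit only in that it walks the items sorted from largest to smallest; worst-fit is identical except that its sort runs from smallest to largest.

    ```python
    def bestFit(items, capacity):
       """Sketch of the documented best-fit behavior (not the shipped code)."""
       if capacity <= 0:
          return ([], 0)                     # zero capacity is degenerate: nothing fits
       ordered = sorted(items.values(), key=lambda pair: pair[1], reverse=True)  # largest first
       included = []
       used = 0
       for item, size in ordered:
          if used + size <= capacity:        # an item that would overflow is thrown away
             included.append(item)
             used += size
             if used == capacity:            # met capacity exactly; stop early
                break
       return (included, used)
    ```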

    worstFit(items, capacity)


    Implements the worst-fit knapsack algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is indexed by item key, and each value is a tuple that also includes that key. This seems a bit strange at first glance. It works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above

    alternateFit(items, capacity)


    Implements the alternate-fit knapsack algorithm.

    This algorithm (which I'm calling "alternate-fit" as in "alternate from one to the other") tries to balance small and large items to achieve better end-of-disk performance. Instead of just working in one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

    The items dictionary is indexed by item key, and each value is a tuple that also includes that key. This seems a bit strange at first glance. It works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above
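    The alternating strategy described above can be sketched as follows, again as a hypothetical reimplementation of the documented behavior rather than the shipped code: consume alternately from the small end and the large end of a size-sorted list, discarding anything that would exceed capacity.

    ```python
    def alternateFit(items, capacity):
       """Sketch of the documented alternate-fit behavior (not the shipped code)."""
       if capacity <= 0:
          return ([], 0)                       # zero capacity is degenerate: nothing fits
       ordered = sorted(items.values(), key=lambda pair: pair[1])  # smallest first
       included = []
       used = 0
       front, back = 0, len(ordered) - 1
       takeSmall = True
       while front <= back:
          item, size = ordered[front] if takeSmall else ordered[back]
          if used + size <= capacity:          # an item that would overflow is thrown away
             included.append(item)
             used += size
          if takeSmall:
             front += 1
          else:
             back -= 1
          takeSmall = not takeSmall            # alternate between the two ends
          if used == capacity:                 # met capacity exactly; stop early
             break
       return (included, used)
    ```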

    CedarBackup2.xmlutil
    Package CedarBackup2 :: Module xmlutil

    Source Code for Module CedarBackup2.xmlutil

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2006,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# Portions Copyright (c) 2000 Fourthought Inc, USA.
# All Rights Reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: xmlutil.py 1022 2011-10-11 23:27:49Z pronovic $
# Purpose  : Provides general XML-related functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides general XML-related functionality.

What I'm trying to do here is abstract much of the functionality that directly
accesses the DOM tree.  This is not so much to "protect" the other code from
the DOM, but to standardize the way it's used.  It will also help extension
authors write code that easily looks more like the rest of Cedar Backup.

@sort: createInputDom, createOutputDom, serializeDom, isElement, readChildren,
       readFirstChild, readStringList, readString, readInteger, readBoolean,
       addContainerNode, addStringNode, addIntegerNode, addBooleanNode,
       TRUE_BOOLEAN_VALUES, FALSE_BOOLEAN_VALUES, VALID_BOOLEAN_VALUES

@var TRUE_BOOLEAN_VALUES: List of boolean values in XML representing C{True}.
@var FALSE_BOOLEAN_VALUES: List of boolean values in XML representing C{False}.
@var VALID_BOOLEAN_VALUES: List of valid boolean values in XML.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""
# pylint: disable=C0111,C0103,W0511,W0104

########################################################################
# Imported modules
########################################################################

# System modules
import sys
import re
import logging
import codecs
from types import UnicodeType
from StringIO import StringIO

# XML-related modules
from xml.parsers.expat import ExpatError
from xml.dom.minidom import Node
from xml.dom.minidom import getDOMImplementation
from xml.dom.minidom import parseString


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.xml")

TRUE_BOOLEAN_VALUES   = [ "Y", "y", ]
FALSE_BOOLEAN_VALUES  = [ "N", "n", ]
VALID_BOOLEAN_VALUES  = TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES


########################################################################
# Functions for creating and parsing DOM trees
########################################################################
    
def createInputDom(xmlData, name="cb_config"):
   """
   Creates a DOM tree based on reading an XML string.
   @param name: Assumed base name of the document (root node name).
   @return: Tuple (xmlDom, parentNode) for the parsed document
   @raise ValueError: If the document can't be parsed.
   """
   try:
      xmlDom = parseString(xmlData)
      parentNode = readFirstChild(xmlDom, name)
      return (xmlDom, parentNode)
   except (IOError, ExpatError), e:
      raise ValueError("Unable to parse XML document: %s" % e)

def createOutputDom(name="cb_config"):
   """
   Creates a DOM tree used for writing an XML document.
   @param name: Base name of the document (root node name).
   @return: Tuple (xmlDom, parentNode) for the new document
   """
   impl = getDOMImplementation()
   xmlDom = impl.createDocument(None, name, None)
   return (xmlDom, xmlDom.documentElement)


########################################################################
# Functions for reading values out of XML documents
########################################################################

def isElement(node):
   """
   Returns True or False depending on whether the XML node is an element node.
   """
   return node.nodeType == Node.ELEMENT_NODE

def readChildren(parent, name):
   """
   Returns a list of nodes with a given name immediately beneath the
   parent.

   By "immediately beneath" the parent, we mean from among nodes that are
   direct children of the passed-in parent node.

   Underneath, we use the Python C{getElementsByTagName} method, which is
   pretty cool, but which (surprisingly?) returns a list of all children
   with a given name below the parent, at any level.  We just prune that
   list to include only children whose C{parentNode} matches the passed-in
   parent.

   @param parent: Parent node to search beneath.
   @param name: Name of nodes to search for.

   @return: List of child nodes with correct parent, or an empty list if
   no matching nodes are found.
   """
   lst = []
   if parent is not None:
      result = parent.getElementsByTagName(name)
      for entry in result:
         if entry.parentNode is parent:
            lst.append(entry)
   return lst

def readFirstChild(parent, name):
   """
   Returns the first child with a given name immediately beneath the parent.

   By "immediately beneath" the parent, we mean from among nodes that are
   direct children of the passed-in parent node.

   @param parent: Parent node to search beneath.
   @param name: Name of node to search for.

   @return: First properly-named child of parent, or C{None} if no matching nodes are found.
   """
   result = readChildren(parent, name)
   if result is None or result == []:
      return None
   return result[0]

def readStringList(parent, name):
   """
   Returns a list of the string contents associated with nodes with a given
   name immediately beneath the parent.

   By "immediately beneath" the parent, we mean from among nodes that are
   direct children of the passed-in parent node.

   First, we find all of the nodes using L{readChildren}, and then we
   retrieve the "string contents" of each of those nodes.  The returned list
   has one entry per matching node.  We assume that string contents of a
   given node belong to the first C{TEXT_NODE} child of that node.  Nodes
   which have no C{TEXT_NODE} children are not represented in the returned
   list.

   @param parent: Parent node to search beneath.
   @param name: Name of node to search for.

   @return: List of strings as described above, or C{None} if no matching nodes are found.
   """
   lst = []
   result = readChildren(parent, name)
   for entry in result:
      if entry.hasChildNodes():
         for child in entry.childNodes:
            if child.nodeType == Node.TEXT_NODE:
               lst.append(child.nodeValue)
               break
   if lst == []:
      lst = None
   return lst

def readString(parent, name):
   """
   Returns string contents of the first child with a given name immediately
   beneath the parent.

   By "immediately beneath" the parent, we mean from among nodes that are
   direct children of the passed-in parent node.  We assume that string
   contents of a given node belong to the first C{TEXT_NODE} child of that
   node.

   @param parent: Parent node to search beneath.
   @param name: Name of node to search for.

   @return: String contents of node or C{None} if no matching nodes are found.
   """
   result = readStringList(parent, name)
   if result is None:
      return None
   return result[0]

def readInteger(parent, name):
   """
   Returns integer contents of the first child with a given name immediately
   beneath the parent.

   By "immediately beneath" the parent, we mean from among nodes that are
   direct children of the passed-in parent node.

   @param parent: Parent node to search beneath.
   @param name: Name of node to search for.

   @return: Integer contents of node or C{None} if no matching nodes are found.
   @raise ValueError: If the string at the location can't be converted to an integer.
   """
   result = readString(parent, name)
   if result is None:
      return None
   else:
      return int(result)

def readFloat(parent, name):
   """
   Returns float contents of the first child with a given name immediately
   beneath the parent.

   By "immediately beneath" the parent, we mean from among nodes that are
   direct children of the passed-in parent node.

   @param parent: Parent node to search beneath.
   @param name: Name of node to search for.

   @return: Float contents of node or C{None} if no matching nodes are found.
   @raise ValueError: If the string at the location can't be converted to a
   float value.
   """
   result = readString(parent, name)
   if result is None:
      return None
   else:
      return float(result)

def readBoolean(parent, name):
   """
   Returns boolean contents of the first child with a given name immediately
   beneath the parent.

   By "immediately beneath" the parent, we mean from among nodes that are
   direct children of the passed-in parent node.

   The string value of the node must be one of the values in L{VALID_BOOLEAN_VALUES}.

   @param parent: Parent node to search beneath.
   @param name: Name of node to search for.

   @return: Boolean contents of node or C{None} if no matching nodes are found.
   @raise ValueError: If the string at the location can't be converted to a boolean.
   """
   result = readString(parent, name)
   if result is None:
      return None
   else:
      if result in TRUE_BOOLEAN_VALUES:
         return True
      elif result in FALSE_BOOLEAN_VALUES:
         return False
      else:
         raise ValueError("Boolean values must be one of %s." % VALID_BOOLEAN_VALUES)


########################################################################
# Functions for writing values into XML documents
########################################################################

def addContainerNode(xmlDom, parentNode, nodeName):
   """
   Adds a container node as the next child of a parent node.

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.

   @return: Reference to the newly-created node.
   """
   containerNode = xmlDom.createElement(nodeName)
   parentNode.appendChild(containerNode)
   return containerNode

def addStringNode(xmlDom, parentNode, nodeName, nodeValue):
   """
   Adds a text node as the next child of a parent, to contain a string.

   If the C{nodeValue} is None, then the node will be created, but will be
   empty (i.e. will contain no text node child).

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param nodeValue: The value to put into the node.

   @return: Reference to the newly-created node.
   """
   containerNode = addContainerNode(xmlDom, parentNode, nodeName)
   if nodeValue is not None:
      textNode = xmlDom.createTextNode(nodeValue)
      containerNode.appendChild(textNode)
   return containerNode

def addIntegerNode(xmlDom, parentNode, nodeName, nodeValue):
   """
   Adds a text node as the next child of a parent, to contain an integer.

   If the C{nodeValue} is None, then the node will be created, but will be
   empty (i.e. will contain no text node child).

   The integer will be converted to a string using "%d".  The result will be
   added to the document via L{addStringNode}.

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param nodeValue: The value to put into the node.

   @return: Reference to the newly-created node.
   """
   if nodeValue is None:
      return addStringNode(xmlDom, parentNode, nodeName, None)
   else:
      return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue)

def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue):
   """
   Adds a text node as the next child of a parent, to contain a boolean.

   If the C{nodeValue} is None, then the node will be created, but will be
   empty (i.e. will contain no text node child).

   Boolean C{True}, or anything else interpreted as C{True} by Python, will
   be converted to a string "Y".  Anything else will be converted to a
   string "N".  The result is added to the document via L{addStringNode}.

   @param xmlDom: DOM tree as from C{impl.createDocument()}.
   @param parentNode: Parent node to create child for.
   @param nodeName: Name of the new container node.
   @param nodeValue: The value to put into the node.

   @return: Reference to the newly-created node.
   """
   if nodeValue is None:
      return addStringNode(xmlDom, parentNode, nodeName, None)
   else:
      if nodeValue:
         return addStringNode(xmlDom, parentNode, nodeName, "Y")
      else:
         return addStringNode(xmlDom, parentNode, nodeName, "N")


########################################################################
# Functions for serializing DOM trees
########################################################################

def serializeDom(xmlDom, indent=3):
   """
   Serializes a DOM tree and returns the result in a string.
   @param xmlDom: XML DOM tree to serialize
   @param indent: Number of spaces to indent, as an integer
   @return: String form of DOM tree, pretty-printed.
   """
   xmlBuffer = StringIO()
   serializer = Serializer(xmlBuffer, "UTF-8", indent=indent)
   serializer.serialize(xmlDom)
   xmlData = xmlBuffer.getvalue()
   xmlBuffer.close()
   return xmlData

class Serializer(object):

   """
   XML serializer class.

   This is a customized serializer that I hacked together based on what I found
   in the PyXML distribution.  Basically, around release 2.7.0, the only reason
   I still had around a dependency on PyXML was for the PrettyPrint
   functionality, and that seemed pointless.  So, I stripped the PrettyPrint
   code out of PyXML and hacked bits of it off until it did just what I needed
   and no more.

   This code started out being called PrintVisitor, but I decided it makes more
   sense just calling it a serializer.  I've made nearly all of the methods
   private, and I've added a new high-level serialize() method rather than
   having clients call C{visit()}.

   Anyway, as a consequence of my hacking with it, this can't quite be called a
   complete XML serializer any more.  I ripped out support for HTML and XHTML,
   and there is also no longer any support for namespaces (which I took out
   because this dragged along a lot of extra code, and Cedar Backup doesn't use
   namespaces).  However, everything else should pretty much work as expected.

   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was Copyright (c) 2000
   Fourthought Inc, USA; All Rights Reserved.
   """

   def __init__(self, stream=sys.stdout, encoding="UTF-8", indent=3):
      """
      Initialize a serializer.
      @param stream: Stream to write output to.
      @param encoding: Output encoding.
      @param indent: Number of spaces to indent, as an integer
      """
      self.stream = stream
      self.encoding = encoding
      self._indent = indent * " "
      self._depth = 0
      self._inText = 0

   def serialize(self, xmlDom):
      """
      Serialize the passed-in XML document.
      @param xmlDom: XML DOM tree to serialize
      @raise ValueError: If there's an unknown node type in the document.
      """
      self._visit(xmlDom)
      self.stream.write("\n")

   def _write(self, text):
      obj = _encodeText(text, self.encoding)
      self.stream.write(obj)
      return

   def _tryIndent(self):
      if not self._inText and self._indent:
         self._write('\n' + self._indent*self._depth)
      return

   def _visit(self, node):
      """
      @raise ValueError: If there's an unknown node type in the document.
      """
      if node.nodeType == Node.ELEMENT_NODE:
         return self._visitElement(node)
      elif node.nodeType == Node.ATTRIBUTE_NODE:
         return self._visitAttr(node)
      elif node.nodeType == Node.TEXT_NODE:
         return self._visitText(node)
      elif node.nodeType == Node.CDATA_SECTION_NODE:
         return self._visitCDATASection(node)
      elif node.nodeType == Node.ENTITY_REFERENCE_NODE:
         return self._visitEntityReference(node)
      elif node.nodeType == Node.ENTITY_NODE:
         return self._visitEntity(node)
      elif node.nodeType == Node.PROCESSING_INSTRUCTION_NODE:
         return self._visitProcessingInstruction(node)
      elif node.nodeType == Node.COMMENT_NODE:
         return self._visitComment(node)
      elif node.nodeType == Node.DOCUMENT_NODE:
         return self._visitDocument(node)
      elif node.nodeType == Node.DOCUMENT_TYPE_NODE:
         return self._visitDocumentType(node)
      elif node.nodeType == Node.DOCUMENT_FRAGMENT_NODE:
         return self._visitDocumentFragment(node)
      elif node.nodeType == Node.NOTATION_NODE:
         return self._visitNotation(node)
      # It has a node type, but we don't know how to handle it
      raise ValueError("Unknown node type: %s" % repr(node))

   def _visitNodeList(self, node, exclude=None):
      for curr in node:
         curr is not exclude and self._visit(curr)
      return

   def _visitNamedNodeMap(self, node):
      for item in node.values():
         self._visit(item)
      return

   def _visitAttr(self, node):
      self._write(' ' + node.name)
      value = node.value
      text = _translateCDATA(value, self.encoding)
      text, delimiter = _translateCDATAAttr(text)
      self.stream.write("=%s%s%s" % (delimiter, text, delimiter))
      return

   def _visitProlog(self):
      self._write("<?xml version='1.0' encoding='%s'?>" % (self.encoding or 'utf-8'))
      self._inText = 0
      return

   def _visitDocument(self, node):
      self._visitProlog()
      node.doctype and self._visitDocumentType(node.doctype)
      self._visitNodeList(node.childNodes, exclude=node.doctype)
      return

   def _visitDocumentFragment(self, node):
      self._visitNodeList(node.childNodes)
      return

   def _visitElement(self, node):
      self._tryIndent()
      self._write('<%s' % node.tagName)
      for attr in node.attributes.values():
         self._visitAttr(attr)
      if len(node.childNodes):
         self._write('>')
         self._depth = self._depth + 1
         self._visitNodeList(node.childNodes)
         self._depth = self._depth - 1
         not (self._inText) and self._tryIndent()
         self._write('</%s>' % node.tagName)
      else:
         self._write('/>')
      self._inText = 0
      return

   def _visitText(self, node):
      text = node.data
      if self._indent:
         text.strip()
      if text:
         text = _translateCDATA(text, self.encoding)
         self.stream.write(text)
         self._inText = 1
      return

   def _visitDocumentType(self, doctype):
      if not doctype.systemId and not doctype.publicId: return
      self._tryIndent()
      self._write('<!DOCTYPE %s' % doctype.name)
      if doctype.systemId and '"' in doctype.systemId:
         system = "'%s'" % doctype.systemId
      else:
         system = '"%s"' % doctype.systemId
      if doctype.publicId and '"' in doctype.publicId:
         # We should probably throw an error
         # Valid characters:  <space> | <newline> | <linefeed> |
         #                    [a-zA-Z0-9] | [-'()+,./:=?;!*#@$_%]
         public = "'%s'" % doctype.publicId
      else:
         public = '"%s"' % doctype.publicId
      if doctype.publicId and doctype.systemId:
         self._write(' PUBLIC %s %s' % (public, system))
      elif doctype.systemId:
         self._write(' SYSTEM %s' % system)
      if doctype.entities or doctype.notations:
         self._write(' [')
         self._depth = self._depth + 1
         self._visitNamedNodeMap(doctype.entities)
         self._visitNamedNodeMap(doctype.notations)
         self._depth = self._depth - 1
         self._tryIndent()
         self._write(']>')
      else:
         self._write('>')
      self._inText = 0
      return

   def _visitEntity(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!ENTITY %s' % (node.nodeName))
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      node.notationName and self._write(' NDATA %s' % node.notationName)
      self._write('>')
      return

   def _visitNotation(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!NOTATION %s' % node.nodeName)
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      self._write('>')
      return

   def _visitCDATASection(self, node):
      self._tryIndent()
      self._write('<![CDATA[%s]]>' % (node.data))
      self._inText = 0
      return

   def _visitComment(self, node):
      self._tryIndent()
      self._write('<!--%s-->' % (node.data))
      self._inText = 0
      return

   def _visitEntityReference(self, node):
      self._write('&%s;' % node.nodeName)
      self._inText = 1
      return

   def _visitProcessingInstruction(self, node):
      self._tryIndent()
      self._write('<?%s %s?>' % (node.target, node.data))
      self._inText = 0
      return

def _encodeText(text, encoding):
   """
   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was attributed to Martin v.
   Löwis and was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.
   """
   encoder = codecs.lookup(encoding)[0]   # encode,decode,reader,writer
   if type(text) is not UnicodeType:
      text = unicode(text, "utf-8")
   return encoder(text)[0]   # result,size

def _translateCDATAAttr(characters):
   """
   Handles normalization and some intelligence about quoting.

   @copyright: This code, prior to customization, was part of the PyXML
   codebase, and before that was part of the 4DOM suite developed by
   Fourthought, Inc.  In its original form, it was Copyright (c) 2000
   Fourthought Inc, USA; All Rights Reserved.
   """
   if not characters:
      return '', "'"
   if "'" in characters:
      delimiter = '"'
      new_chars = re.sub('"', '&quot;', characters)
   else:
      delimiter = "'"
      new_chars = re.sub("'", '&apos;', characters)
   #FIXME: There's more to normalization
   #Convert attribute new-lines to character entity
   # characters is possibly shorter than new_chars (no entities)
   if "\n" in characters:
      new_chars = re.sub('\n', '&#10;', new_chars)
   return new_chars, delimiter
    676 677 #Note: Unicode object only for now
    678 -def _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0):
    679 """ 680 @copyright: This code, prior to customization, was part of the PyXML 681 codebase, and before that was part of the 4DOM suite developed by 682 Fourthought, Inc. It its original form, it was Copyright (c) 2000 683 Fourthought Inc, USA; All Rights Reserved. 684 """ 685 CDATA_CHAR_PATTERN = re.compile('[&<]|]]>') 686 CHAR_TO_ENTITY = { '&': '&amp;', '<': '&lt;', ']]>': ']]&gt;', } 687 ILLEGAL_LOW_CHARS = '[\x01-\x08\x0B-\x0C\x0E-\x1F]' 688 ILLEGAL_HIGH_CHARS = '\xEF\xBF[\xBE\xBF]' 689 XML_ILLEGAL_CHAR_PATTERN = re.compile('%s|%s'%(ILLEGAL_LOW_CHARS, ILLEGAL_HIGH_CHARS)) 690 if not characters: 691 return '' 692 if not markupSafe: 693 if CDATA_CHAR_PATTERN.search(characters): 694 new_string = CDATA_CHAR_PATTERN.subn(lambda m, d=CHAR_TO_ENTITY: d[m.group()], characters)[0] 695 else: 696 new_string = characters 697 if prev_chars[-2:] == ']]' and characters[0] == '>': 698 new_string = '&gt;' + new_string[1:] 699 else: 700 new_string = characters 701 #Note: use decimal char entity rep because some browsers are broken 702 #FIXME: This will bomb for high characters. Should, for instance, detect 703 #The UTF-8 for 0xFFFE and put out &#xFFFE; 704 if XML_ILLEGAL_CHAR_PATTERN.search(new_string): 705 new_string = XML_ILLEGAL_CHAR_PATTERN.subn(lambda m: '&#%i;' % ord(m.group()), new_string)[0] 706 new_string = _encodeText(new_string, encoding) 707 return new_string
    708

CedarBackup2-2.22.0/doc/interface/CedarBackup2.testutil-pysrc.html

    Source Code for Module CedarBackup2.testutil

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2006,2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: testutil.py 1023 2011-10-11 23:44:50Z pronovic $ 
     31  # Purpose  : Provides unit-testing utilities. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides unit-testing utilities.  
     41   
     42  These utilities are kept here, separate from util.py, because they provide 
     43  common functionality that I do not want exported "publicly" once Cedar Backup 
     44  is installed on a system.  They are only used for unit testing, and are only 
     45  useful within the source tree. 
     46   
     47  Many of these functions are in here because they are "good enough" for unit 
     48  test work but are not robust enough to be real public functions.  Others (like 
     49  L{removedir}) do what they are supposed to, but I don't want responsibility for 
     50  making them available to others. 
     51   
     52  @sort: findResources, commandAvailable, 
     53         buildPath, removedir, extractTar, changeFileAge, 
     54         getMaskAsMode, getLogin, failUnlessAssignRaises, runningAsRoot, 
     55         platformDebian, platformMacOsX, platformCygwin, platformWindows,  
     56         platformHasEcho, platformSupportsLinks, platformSupportsPermissions, 
     57         platformRequiresBinaryRead 
     58   
     59  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     60  """ 
     61   
     62   
     63  ######################################################################## 
     64  # Imported modules 
     65  ######################################################################## 
     66   
     67  import sys 
     68  import os 
     69  import tarfile 
     70  import time 
     71  import getpass 
     72  import random 
     73  import string # pylint: disable=W0402 
     74  import platform 
     75  import logging 
     76  from StringIO import StringIO 
     77   
     78  from CedarBackup2.util import encodePath, executeCommand 
     79  from CedarBackup2.config import Config, OptionsConfig 
     80  from CedarBackup2.customize import customizeOverrides 
     81  from CedarBackup2.cli import setupPathResolver 
     82   
     83   
     84  ######################################################################## 
     85  # Public functions 
     86  ######################################################################## 
     87   
     88  ############################## 
     89  # setupDebugLogger() function 
     90  ############################## 
     91   
    
    92 -def setupDebugLogger():
    93 """ 94 Sets up a screen logger for debugging purposes. 95 96 Normally, the CLI functionality configures the logger so that 97 things get written to the right place. However, for debugging 98 it's sometimes nice to just get everything -- debug information 99 and output -- dumped to the screen. This function takes care 100 of that. 101 """ 102 logger = logging.getLogger("CedarBackup2") 103 logger.setLevel(logging.DEBUG) # let the logger see all messages 104 formatter = logging.Formatter(fmt="%(message)s") 105 handler = logging.StreamHandler(strm=sys.stdout) 106 handler.setFormatter(formatter) 107 handler.setLevel(logging.DEBUG) 108 logger.addHandler(handler)
    109 110 111 ################# 112 # setupOverrides 113 ################# 114
    115 -def setupOverrides():
    116 """ 117 Set up any platform-specific overrides that might be required. 118 119 When packages are built, this is done manually (hardcoded) in customize.py 120 and the overrides are set up in cli.cli(). This way, no runtime checks need 121 to be done. This is safe, because the package maintainer knows exactly 122 which platform (Debian or not) the package is being built for. 123 124 Unit tests are different, because they might be run anywhere. So, we 125 attempt to make a guess about plaform using platformDebian(), and use that 126 to set up the custom overrides so that platform-specific unit tests continue 127 to work. 128 """ 129 config = Config() 130 config.options = OptionsConfig() 131 if platformDebian(): 132 customizeOverrides(config, platform="debian") 133 else: 134 customizeOverrides(config, platform="standard") 135 setupPathResolver(config)
    136 137 138 ########################### 139 # findResources() function 140 ########################### 141
    142 -def findResources(resources, dataDirs):
    143 """ 144 Returns a dictionary of locations for various resources. 145 @param resources: List of required resources. 146 @param dataDirs: List of data directories to search within for resources. 147 @return: Dictionary mapping resource name to resource path. 148 @raise Exception: If some resource cannot be found. 149 """ 150 mapping = { } 151 for resource in resources: 152 for resourceDir in dataDirs: 153 path = os.path.join(resourceDir, resource) 154 if os.path.exists(path): 155 mapping[resource] = path 156 break 157 else: 158 raise Exception("Unable to find resource [%s]." % resource) 159 return mapping
    160 161 162 ############################## 163 # commandAvailable() function 164 ############################## 165
    166 -def commandAvailable(command):
    167 """ 168 Indicates whether a command is available on $PATH somewhere. 169 This should work on both Windows and UNIX platforms. 170 @param command: Commang to search for 171 @return: Boolean true/false depending on whether command is available. 172 """ 173 if os.environ.has_key("PATH"): 174 for path in os.environ["PATH"].split(os.sep): 175 if os.path.exists(os.path.join(path, command)): 176 return True 177 return False
    178 179 180 ####################### 181 # buildPath() function 182 ####################### 183
    184 -def buildPath(components):
    185 """ 186 Builds a complete path from a list of components. 187 For instance, constructs C{"/a/b/c"} from C{["/a", "b", "c",]}. 188 @param components: List of components. 189 @returns: String path constructed from components. 190 @raise ValueError: If a path cannot be encoded properly. 191 """ 192 path = components[0] 193 for component in components[1:]: 194 path = os.path.join(path, component) 195 return encodePath(path)
    196 197 198 ####################### 199 # removedir() function 200 ####################### 201
    202 -def removedir(tree):
    203 """ 204 Recursively removes an entire directory. 205 This is basically taken from an example on python.com. 206 @param tree: Directory tree to remove. 207 @raise ValueError: If a path cannot be encoded properly. 208 """ 209 tree = encodePath(tree) 210 for root, dirs, files in os.walk(tree, topdown=False): 211 for name in files: 212 path = os.path.join(root, name) 213 if os.path.islink(path): 214 os.remove(path) 215 elif os.path.isfile(path): 216 os.remove(path) 217 for name in dirs: 218 path = os.path.join(root, name) 219 if os.path.islink(path): 220 os.remove(path) 221 elif os.path.isdir(path): 222 os.rmdir(path) 223 os.rmdir(tree)
    224 225 226 ######################## 227 # extractTar() function 228 ######################## 229
    230 -def extractTar(tmpdir, filepath):
    231 """ 232 Extracts the indicated tar file to the indicated tmpdir. 233 @param tmpdir: Temp directory to extract to. 234 @param filepath: Path to tarfile to extract. 235 @raise ValueError: If a path cannot be encoded properly. 236 """ 237 # pylint: disable=E1101 238 tmpdir = encodePath(tmpdir) 239 filepath = encodePath(filepath) 240 tar = tarfile.open(filepath) 241 try: 242 tar.format = tarfile.GNU_FORMAT 243 except AttributeError: 244 tar.posix = False 245 for tarinfo in tar: 246 tar.extract(tarinfo, tmpdir)
    247 248 249 ########################### 250 # changeFileAge() function 251 ########################### 252
    253 -def changeFileAge(filename, subtract=None):
    254 """ 255 Changes a file age using the C{os.utime} function. 256 257 @note: Some platforms don't seem to be able to set an age precisely. As a 258 result, whereas we might have intended to set an age of 86400 seconds, we 259 actually get an age of 86399.375 seconds. When util.calculateFileAge() 260 looks at that the file, it calculates an age of 0.999992766204 days, which 261 then gets truncated down to zero whole days. The tests get very confused. 262 To work around this, I always subtract off one additional second as a fudge 263 factor. That way, the file age will be I{at least} as old as requested 264 later on. 265 266 @param filename: File to operate on. 267 @param subtract: Number of seconds to subtract from the current time. 268 @raise ValueError: If a path cannot be encoded properly. 269 """ 270 filename = encodePath(filename) 271 newTime = time.time() - 1 272 if subtract is not None: 273 newTime -= subtract 274 os.utime(filename, (newTime, newTime))
    275 276 277 ########################### 278 # getMaskAsMode() function 279 ########################### 280
    281 -def getMaskAsMode():
    282 """ 283 Returns the user's current umask inverted to a mode. 284 A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775. 285 @return: Umask converted to a mode, as an integer. 286 """ 287 umask = os.umask(0777) 288 os.umask(umask) 289 return int(~umask & 0777) # invert, then use only lower bytes
    290 291 292 ###################### 293 # getLogin() function 294 ###################### 295
    296 -def getLogin():
    297 """ 298 Returns the name of the currently-logged in user. This might fail under 299 some circumstances - but if it does, our tests would fail anyway. 300 """ 301 return getpass.getuser()
    302 303 304 ############################ 305 # randomFilename() function 306 ############################ 307
    308 -def randomFilename(length, prefix=None, suffix=None):
    309 """ 310 Generates a random filename with the given length. 311 @param length: Length of filename. 312 @return Random filename. 313 """ 314 characters = [None] * length 315 for i in xrange(length): 316 characters[i] = random.choice(string.ascii_uppercase) 317 if prefix is None: 318 prefix = "" 319 if suffix is None: 320 suffix = "" 321 return "%s%s%s" % (prefix, "".join(characters), suffix)
    322 323 324 #################################### 325 # failUnlessAssignRaises() function 326 #################################### 327
    328 -def failUnlessAssignRaises(testCase, exception, obj, prop, value):
    329 """ 330 Equivalent of C{failUnlessRaises}, but used for property assignments instead. 331 332 It's nice to be able to use C{failUnlessRaises} to check that a method call 333 raises the exception that you expect. Unfortunately, this method can't be 334 used to check Python propery assignments, even though these property 335 assignments are actually implemented underneath as methods. 336 337 This function (which can be easily called by unit test classes) provides an 338 easy way to wrap the assignment checks. It's not pretty, or as intuitive as 339 the original check it's modeled on, but it does work. 340 341 Let's assume you make this method call:: 342 343 testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath) 344 345 If you do this, a test case failure will be raised unless the assignment:: 346 347 collectDir.absolutePath = absolutePath 348 349 fails with a C{ValueError} exception. The failure message differentiates 350 between the case where no exception was raised and the case where the wrong 351 exception was raised. 352 353 @note: Internally, the C{missed} and C{instead} variables are used rather 354 than directly calling C{testCase.fail} upon noticing a problem because the 355 act of "failure" itself generates an exception that would be caught by the 356 general C{except} clause. 357 358 @param testCase: PyUnit test case object (i.e. self). 359 @param exception: Exception that is expected to be raised. 360 @param obj: Object whose property is to be assigned to. 361 @param prop: Name of the property, as a string. 362 @param value: Value that is to be assigned to the property. 363 364 @see: C{unittest.TestCase.failUnlessRaises} 365 """ 366 missed = False 367 instead = None 368 try: 369 exec "obj.%s = value" % prop # pylint: disable=W0122 370 missed = True 371 except exception: pass 372 except Exception, e: 373 instead = e 374 if missed: 375 testCase.fail("Expected assignment to raise %s, but got no exception." 
% (exception.__name__)) 376 if instead is not None: 377 testCase.fail("Expected assignment to raise %s, but got %s instead." % (ValueError, instead.__class__.__name__))
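In Python 3 the `exec` workaround is unnecessary: `setattr` turns a property assignment into an ordinary call that can sit inside a try block. A simplified sketch using an invented `Temperature` class for illustration (the real helper also distinguishes the wrong-exception case):

```python
class Temperature:
    # Toy property that rejects physically impossible values
    def __init__(self):
        self._kelvin = 0.0

    @property
    def kelvin(self):
        return self._kelvin

    @kelvin.setter
    def kelvin(self, value):
        if value < 0:
            raise ValueError("temperature below absolute zero")
        self._kelvin = value

def assign_raises(exception, obj, prop, value):
    # setattr() makes the assignment a plain call, so no exec is needed
    try:
        setattr(obj, prop, value)
    except exception:
        return True
    return False

t = Temperature()
rejected = assign_raises(ValueError, t, "kelvin", -5.0)
accepted = assign_raises(ValueError, t, "kelvin", 300.0)
```

In a real test case, `unittest.TestCase.assertRaises` used as a context manager around the `setattr` call achieves the same thing with standard failure messages.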
    378 379 380 ########################### 381 # captureOutput() function 382 ########################### 383
    384 -def captureOutput(c):
    385 """ 386 Captures the output (stdout, stderr) of a function or a method. 387 388 Some of our functions don't do anything other than just print output. We 389 need a way to test these functions (at least nominally) but we don't want 390 any of the output spoiling the test suite output. 391 392 This function just creates a dummy file descriptor that can be used as a 393 target by the callable function, rather than C{stdout} or C{stderr}. 394 395 @note: This method assumes that C{callable} doesn't take any arguments 396 besides keyword argument C{fd} to specify the file descriptor. 397 398 @param c: Callable function or method. 399 400 @return: Output of function, as one big string. 401 """ 402 fd = StringIO() 403 c(fd=fd) 404 result = fd.getvalue() 405 fd.close() 406 return result
    407 408 409 ######################### 410 # _isPlatform() function 411 ######################### 412
    413 -def _isPlatform(name):
    414 """ 415 Returns boolean indicating whether we're running on the indicated platform. 416 @param name: Platform name to check, currently one of "windows" or "macosx" 417 """ 418 if name == "windows": 419 return platform.platform(True, True).startswith("Windows") 420 elif name == "macosx": 421 return sys.platform == "darwin" 422 elif name == "debian": 423 return platform.platform(False, False).find("debian") > 0 424 elif name == "cygwin": 425 return platform.platform(True, True).startswith("CYGWIN") 426 else: 427 raise ValueError("Unknown platform [%s]." % name)
    428 429 430 ############################ 431 # platformDebian() function 432 ############################ 433
    434 -def platformDebian():
    435 """ 436 Returns boolean indicating whether this is the Debian platform. 437 """ 438 return _isPlatform("debian")
    439 440 441 ############################ 442 # platformMacOsX() function 443 ############################ 444
    445 -def platformMacOsX():
    446 """ 447 Returns boolean indicating whether this is the Mac OS X platform. 448 """ 449 return _isPlatform("macosx")
    450 451 452 ############################# 453 # platformWindows() function 454 ############################# 455
    456 -def platformWindows():
    457 """ 458 Returns boolean indicating whether this is the Windows platform. 459 """ 460 return _isPlatform("windows")
    461 462 463 ############################ 464 # platformCygwin() function 465 ############################ 466
    467 -def platformCygwin():
    468 """ 469 Returns boolean indicating whether this is the Cygwin platform. 470 """ 471 return _isPlatform("cygwin")
    472 473 474 ################################### 475 # platformSupportsLinks() function 476 ################################### 477 485 486 487 ######################################### 488 # platformSupportsPermissions() function 489 ######################################### 490
    492 """ 493 Returns boolean indicating whether the platform supports UNIX-style file permissions. 494 Some platforms, like Windows, do not support permissions, and tests need to take 495 this into account. 496 """ 497 return not platformWindows()
    498 499 500 ######################################## 501 # platformRequiresBinaryRead() function 502 ######################################## 503
    505 """ 506 Returns boolean indicating whether the platform requires binary reads. 507 Some platforms, like Windows, require a special flag to read binary data 508 from files. 509 """ 510 return platformWindows()
    511 512 513 ############################# 514 # platformHasEcho() function 515 ############################# 516
    517 -def platformHasEcho():
    518 """ 519 Returns boolean indicating whether the platform has a sensible echo command. 520 On some platforms, like Windows, echo doesn't really work for tests. 521 """ 522 return not platformWindows()
    523 524 525 ########################### 526 # runningAsRoot() function 527 ########################### 528
    529 -def runningAsRoot():
    530 """ 531 Returns boolean indicating whether the effective user id is root. 532 This is always true on platforms that have no concept of root, like Windows. 533 """ 534 if platformWindows(): 535 return True 536 else: 537 return os.geteuid() == 0
    538 539 540 ############################## 541 # availableLocales() function 542 ############################## 543
    544 -def availableLocales():
    545 """ 546 Returns a list of available locales on the system 547 @return: List of string locale names 548 """ 549 locales = [] 550 output = executeCommand(["locale"], [ "-a", ], returnOutput=True, ignoreStderr=True)[1] 551 for line in output: 552 locales.append(line.rstrip()) 553 return locales
    554 555 556 #################################### 557 # hexFloatLiteralAllowed() function 558 #################################### 559
    561 """ 562 Indicates whether hex float literals are allowed by the interpreter. 563 564 As far back as 2004, some Python documentation indicated that octal and hex 565 notation applied only to integer literals. However, prior to Python 2.5, it 566 was legal to construct a float with an argument like 0xAC on some platforms. 567 This check provides a an indication of whether the current interpreter 568 supports that behavior. 569 570 This check exists so that unit tests can continue to test the same thing as 571 always for pre-2.5 interpreters (i.e. making sure backwards compatibility 572 doesn't break) while still continuing to work for later interpreters. 573 574 The returned value is True if hex float literals are allowed, False otherwise. 575 """ 576 if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5] and not platformWindows(): 577 return True 578 return False
    579

CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.cdwriter-pysrc.html

    Source Code for Module CedarBackup2.writers.cdwriter

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python (>= 2.5) 
      29  # Project  : Cedar Backup, release 2 
      30  # Revision : $Id: cdwriter.py 1041 2013-05-10 02:05:13Z pronovic $ 
      31  # Purpose  : Provides functionality related to CD writer devices. 
      32  # 
      33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      34   
      35  ######################################################################## 
      36  # Module documentation 
      37  ######################################################################## 
      38   
      39  """ 
      40  Provides functionality related to CD writer devices. 
      41   
      42  @sort: MediaDefinition, MediaCapacity, CdWriter, 
      43         MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 
      44   
      45  @var MEDIA_CDRW_74: Constant representing 74-minute CD-RW media. 
      46  @var MEDIA_CDR_74: Constant representing 74-minute CD-R media. 
      47  @var MEDIA_CDRW_80: Constant representing 80-minute CD-RW media. 
      48  @var MEDIA_CDR_80: Constant representing 80-minute CD-R media. 
      49   
      50  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      51  """ 
      52   
      53  ######################################################################## 
      54  # Imported modules 
      55  ######################################################################## 
      56   
      57  # System modules 
      58  import os 
      59  import re 
      60  import logging 
      61  import tempfile 
      62  import time 
      63   
      64  # Cedar Backup modules 
      65  from CedarBackup2.util import resolveCommand, executeCommand 
      66  from CedarBackup2.util import convertSize, displayBytes, encodePath 
      67  from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES 
      68  from CedarBackup2.writers.util import validateDevice, validateScsiId, validateDriveSpeed 
      69  from CedarBackup2.writers.util import IsoImage 
      70   
      71   
      72  ######################################################################## 
      73  # Module-wide constants and variables 
      74  ######################################################################## 
      75   
      76  logger = logging.getLogger("CedarBackup2.log.writers.cdwriter") 
      77   
      78  MEDIA_CDRW_74  = 1 
      79  MEDIA_CDR_74   = 2 
      80  MEDIA_CDRW_80  = 3 
      81  MEDIA_CDR_80   = 4 
      82   
      83  CDRECORD_COMMAND = [ "cdrecord", ] 
      84  EJECT_COMMAND    = [ "eject", ] 
      85  MKISOFS_COMMAND  = [ "mkisofs", ] 
    
    86 87 88 ######################################################################## 89 # MediaDefinition class definition 90 ######################################################################## 91 92 -class MediaDefinition(object):
    93 94 """ 95 Class encapsulating information about CD media definitions. 96 97 The following media types are accepted: 98 99 - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) 100 - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) 101 - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) 102 - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) 103 104 Note that all of the capacities associated with a media definition are in 105 terms of ISO sectors (C{util.ISO_SECTOR_SIZE)}. 106 107 @sort: __init__, mediaType, rewritable, initialLeadIn, leadIn, capacity 108 """ 109
    110 - def __init__(self, mediaType):
    111 """ 112 Creates a media definition for the indicated media type. 113 @param mediaType: Type of the media, as discussed above. 114 @raise ValueError: If the media type is unknown or unsupported. 115 """ 116 self._mediaType = None 117 self._rewritable = False 118 self._initialLeadIn = 0. 119 self._leadIn = 0.0 120 self._capacity = 0.0 121 self._setValues(mediaType)
    122
    123 - def _setValues(self, mediaType):
    124 """ 125 Sets values based on media type. 126 @param mediaType: Type of the media, as discussed above. 127 @raise ValueError: If the media type is unknown or unsupported. 128 """ 129 if mediaType not in [MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80]: 130 raise ValueError("Invalid media type %d." % mediaType) 131 self._mediaType = mediaType 132 self._initialLeadIn = 11400.0 # per cdrecord's documentation 133 self._leadIn = 6900.0 # per cdrecord's documentation 134 if self._mediaType == MEDIA_CDR_74: 135 self._rewritable = False 136 self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) 137 elif self._mediaType == MEDIA_CDRW_74: 138 self._rewritable = True 139 self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) 140 elif self._mediaType == MEDIA_CDR_80: 141 self._rewritable = False 142 self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS) 143 elif self._mediaType == MEDIA_CDRW_80: 144 self._rewritable = True 145 self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS)
    146
    147 - def _getMediaType(self):
    148 """ 149 Property target used to get the media type value. 150 """ 151 return self._mediaType
    152
    153 - def _getRewritable(self):
    154 """ 155 Property target used to get the rewritable flag value. 156 """ 157 return self._rewritable
    158
    159 - def _getInitialLeadIn(self):
    160 """ 161 Property target used to get the initial lead-in value. 162 """ 163 return self._initialLeadIn
    164
    165 - def _getLeadIn(self):
    166 """ 167 Property target used to get the lead-in value. 168 """ 169 return self._leadIn
    170
    171 - def _getCapacity(self):
    172 """ 173 Property target used to get the capacity value. 174 """ 175 return self._capacity
    176 177 mediaType = property(_getMediaType, None, None, doc="Configured media type.") 178 rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") 179 initialLeadIn = property(_getInitialLeadIn, None, None, doc="Initial lead-in required for first image written to media.") 180 leadIn = property(_getLeadIn, None, None, doc="Lead-in required on successive images written to media.") 181 capacity = property(_getCapacity, None, None, doc="Total capacity of the media before any required lead-in.")
    182
########################################################################
# MediaCapacity class definition
########################################################################

class MediaCapacity(object):

   """
   Class encapsulating information about CD media capacity.

   Space used includes the required media lead-in (unless the disk is unused).
   Space available attempts to provide a picture of how many bytes are
   available for data storage, including any required lead-in.

   The boundaries value is either C{None} (if multisession discs are not
   supported or if the disc has no boundaries) or in exactly the form provided
   by C{cdrecord -msinfo}.  It can be passed as-is to the C{IsoImage} class.

   @sort: __init__, bytesUsed, bytesAvailable, boundaries, totalCapacity, utilized
   """

   def __init__(self, bytesUsed, bytesAvailable, boundaries):
      """
      Initializes a capacity object.
      @raise IndexError: If the boundaries tuple does not have enough elements.
      @raise ValueError: If the boundaries values are not integers.
      @raise ValueError: If the bytes used and available values are not floats.
      """
      self._bytesUsed = float(bytesUsed)
      self._bytesAvailable = float(bytesAvailable)
      if boundaries is None:
         self._boundaries = None
      else:
         self._boundaries = (int(boundaries[0]), int(boundaries[1]))

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized)

   def _getBytesUsed(self):
      """
      Property target to get the bytes-used value.
      """
      return self._bytesUsed

   def _getBytesAvailable(self):
      """
      Property target to get the bytes-available value.
      """
      return self._bytesAvailable

   def _getBoundaries(self):
      """
      Property target to get the boundaries tuple.
      """
      return self._boundaries

   def _getTotalCapacity(self):
      """
      Property target to get the total capacity (used + available).
      """
      return self.bytesUsed + self.bytesAvailable

   def _getUtilized(self):
      """
      Property target to get the percent of capacity which is utilized.
      """
      if self.bytesAvailable <= 0.0:
         return 100.0
      elif self.bytesUsed <= 0.0:
         return 0.0
      return (self.bytesUsed / self.totalCapacity) * 100.0

   bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.")
   bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.")
   boundaries = property(_getBoundaries, None, None, doc="Session disc boundaries, in terms of ISO sectors.")
   totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.")
   utilized = property(_getUtilized, None, None, doc="Percentage of the total capacity which is utilized.")
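The utilization percentage reported by __str__ above is simple division, but with two guard cases: a disc with no space available counts as fully utilized, and a disc with nothing used counts as empty. A standalone sketch of the same logic (the function name is illustrative, not Cedar Backup's):

```python
def utilized(bytes_used, bytes_available):
    """Percent of total capacity used, mirroring MediaCapacity's logic:
    no space available -> 100%, nothing used -> 0%, else used/total."""
    if bytes_available <= 0.0:
        return 100.0
    elif bytes_used <= 0.0:
        return 0.0
    return (bytes_used / (bytes_used + bytes_available)) * 100.0
```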
########################################################################
# _ImageProperties class definition
########################################################################

class _ImageProperties(object):
   """
   Simple value object to hold image properties for C{CdWriter}.
   """
   def __init__(self):
      self.newDisc = False
      self.tmpdir = None
      self.mediaLabel = None
      self.entries = None  # dict mapping path to graft point
    279 280 ######################################################################## 281 # CdWriter class definition 282 ######################################################################## 283 284 -class CdWriter(object):
    285 286 ###################### 287 # Class documentation 288 ###################### 289 290 """ 291 Class representing a device that knows how to write CD media. 292 293 Summary 294 ======= 295 296 This is a class representing a device that knows how to write CD media. It 297 provides common operations for the device, such as ejecting the media, 298 writing an ISO image to the media, or checking for the current media 299 capacity. It also provides a place to store device attributes, such as 300 whether the device supports writing multisession discs, etc. 301 302 This class is implemented in terms of the C{eject} and C{cdrecord} 303 programs, both of which should be available on most UN*X platforms. 304 305 Image Writer Interface 306 ====================== 307 308 The following methods make up the "image writer" interface shared 309 with other kinds of writers (such as DVD writers):: 310 311 __init__ 312 initializeImage() 313 addImageEntry() 314 writeImage() 315 setImageNewDisc() 316 retrieveCapacity() 317 getEstimatedImageSize() 318 319 Only these methods will be used by other Cedar Backup functionality 320 that expects a compatible image writer. 321 322 The media attribute is also assumed to be available. 323 324 Media Types 325 =========== 326 327 This class knows how to write to two different kinds of media, represented 328 by the following constants: 329 330 - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) 331 - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) 332 - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) 333 - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) 334 335 Most hardware can read and write both 74-minute and 80-minute CD-R and 336 CD-RW media. Some older drives may only be able to write CD-R media. 337 The difference between the two is that CD-RW media can be rewritten 338 (erased), while CD-R media cannot be. 339 340 I do not support any other configurations for a couple of reasons. 
The 341 first is that I've never tested any other kind of media. The second is 342 that anything other than 74 or 80 minute is apparently non-standard. 343 344 Device Attributes vs. Media Attributes 345 ====================================== 346 347 A given writer instance has two different kinds of attributes associated 348 with it, which I call device attributes and media attributes. Device 349 attributes are things which can be determined without looking at the 350 media, such as whether the drive supports writing multisession disks or 351 has a tray. Media attributes are attributes which vary depending on the 352 state of the media, such as the remaining capacity on a disc. In 353 general, device attributes are available via instance variables and are 354 constant over the life of an object, while media attributes can be 355 retrieved through method calls. 356 357 Talking to Hardware 358 =================== 359 360 This class needs to talk to CD writer hardware in two different ways: 361 through cdrecord to actually write to the media, and through the 362 filesystem to do things like open and close the tray. 363 364 Historically, CdWriter has interacted with cdrecord using the scsiId 365 attribute, and with most other utilities using the device attribute. 366 This changed somewhat in Cedar Backup 2.9.0. 367 368 When Cedar Backup was first written, the only way to interact with 369 cdrecord was by using a SCSI device id. IDE devices were mapped to 370 pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" 371 arrived, and it became common to see C{ATA:1,0,0} or C{ATAPI:0,0,0} as a 372 way to address IDE hardware. By late 2006, C{ATA} and C{ATAPI} had 373 apparently been deprecated in favor of just addressing the IDE device 374 directly by name, i.e. C{/dev/cdrw}. 375 376 Because of this latest development, it no longer makes sense to require a 377 CdWriter to be created with a SCSI id -- there might not be one. 
So, the 378 passed-in SCSI id is now optional. Also, there is now a hardwareId 379 attribute. This attribute is filled in with either the SCSI id (if 380 provided) or the device (otherwise). The hardware id is the value that 381 will be passed to cdrecord in the C{dev=} argument. 382 383 Testing 384 ======= 385 386 It's rather difficult to test this code in an automated fashion, even if 387 you have access to a physical CD writer drive. It's even more difficult 388 to test it if you are running on some build daemon (think of a Debian 389 autobuilder) which can't be expected to have any hardware or any media 390 that you could write to. 391 392 Because of this, much of the implementation below is in terms of static 393 methods that are supposed to take defined actions based on their 394 arguments. Public methods are then implemented in terms of a series of 395 calls to simplistic static methods. This way, we can test as much as 396 possible of the functionality via testing the static methods, while 397 hoping that if the static methods are called appropriately, things will 398 work properly. It's not perfect, but it's much better than no testing at 399 all. 400 401 @sort: __init__, isRewritable, _retrieveProperties, retrieveCapacity, _getBoundaries, 402 _calculateCapacity, openTray, closeTray, refreshMedia, writeImage, 403 _blankMedia, _parsePropertiesOutput, _parseBoundariesOutput, 404 _buildOpenTrayArgs, _buildCloseTrayArgs, _buildPropertiesArgs, 405 _buildBoundariesArgs, _buildBlankArgs, _buildWriteArgs, 406 device, scsiId, hardwareId, driveSpeed, media, deviceType, deviceVendor, 407 deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject, 408 initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize 409 """ 410 411 ############## 412 # Constructor 413 ############## 414
   def __init__(self, device, scsiId=None, driveSpeed=None,
                mediaType=MEDIA_CDRW_74, noEject=False,
                refreshMediaDelay=0, ejectDelay=0, unittest=False):
      """
      Initializes a CD writer object.

      The current user must have write access to the device at the time the
      object is instantiated, or an exception will be thrown.  However, no
      media-related validation is done, and in fact there is no need for any
      media to be in the drive until one of the other media attribute-related
      methods is called.

      The various instance variables such as C{deviceType}, C{deviceVendor},
      etc. might be C{None}, if we're unable to parse this specific information
      from the C{cdrecord} output.  This information is just for reference.

      The SCSI id is optional, but the device path is required.  If the SCSI id
      is passed in, then the hardware id attribute will be taken from the SCSI
      id.  Otherwise, the hardware id will be taken from the device.

      If cdrecord improperly detects whether your writer device has a tray and
      can be safely opened and closed, then pass in C{noEject=True}.  This
      will override the properties and the device will never be ejected.

      @note: The C{unittest} parameter should never be set to C{True}
      outside of Cedar Backup code.  It is intended for use in unit testing
      Cedar Backup internals and has no other sensible purpose.

      @param device: Filesystem device associated with this writer.
      @type device: Absolute path to a filesystem device, i.e. C{/dev/cdrw}

      @param scsiId: SCSI id for the device (optional).
      @type scsiId: If provided, SCSI id in the form C{[<method>:]scsibus,target,lun}

      @param driveSpeed: Speed at which the drive writes.
      @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default.

      @param mediaType: Type of the media that is assumed to be in the drive.
      @type mediaType: One of the valid media types as discussed above.

      @param noEject: Overrides properties to indicate that the device does not support eject.
      @type noEject: Boolean true/false

      @param refreshMediaDelay: Refresh media delay to use, if any
      @type refreshMediaDelay: Number of seconds, an integer >= 0

      @param ejectDelay: Eject delay to use, if any
      @type ejectDelay: Number of seconds, an integer >= 0

      @param unittest: Turns off certain validations, for use in unit testing.
      @type unittest: Boolean true/false

      @raise ValueError: If the device is not valid for some reason.
      @raise ValueError: If the SCSI id is not in a valid form.
      @raise ValueError: If the drive speed is not an integer >= 1.
      @raise IOError: If device properties could not be read for some reason.
      """
      self._image = None  # optionally filled in by initializeImage()
      self._device = validateDevice(device, unittest)
      self._scsiId = validateScsiId(scsiId)
      self._driveSpeed = validateDriveSpeed(driveSpeed)
      self._media = MediaDefinition(mediaType)
      self._noEject = noEject
      self._refreshMediaDelay = refreshMediaDelay
      self._ejectDelay = ejectDelay
      if not unittest:
         (self._deviceType,
          self._deviceVendor,
          self._deviceId,
          self._deviceBufferSize,
          self._deviceSupportsMulti,
          self._deviceHasTray,
          self._deviceCanEject) = self._retrieveProperties()

   #############
   # Properties
   #############
   def _getDevice(self):
      """
      Property target used to get the device value.
      """
      return self._device

   def _getScsiId(self):
      """
      Property target used to get the SCSI id value.
      """
      return self._scsiId

   def _getHardwareId(self):
      """
      Property target used to get the hardware id value.
      """
      if self._scsiId is None:
         return self._device
      return self._scsiId

   def _getDriveSpeed(self):
      """
      Property target used to get the drive speed.
      """
      return self._driveSpeed

   def _getMedia(self):
      """
      Property target used to get the media description.
      """
      return self._media

   def _getDeviceType(self):
      """
      Property target used to get the device type.
      """
      return self._deviceType

   def _getDeviceVendor(self):
      """
      Property target used to get the device vendor.
      """
      return self._deviceVendor

   def _getDeviceId(self):
      """
      Property target used to get the device id.
      """
      return self._deviceId

   def _getDeviceBufferSize(self):
      """
      Property target used to get the device buffer size.
      """
      return self._deviceBufferSize

   def _getDeviceSupportsMulti(self):
      """
      Property target used to get the device-support-multi flag.
      """
      return self._deviceSupportsMulti

   def _getDeviceHasTray(self):
      """
      Property target used to get the device-has-tray flag.
      """
      return self._deviceHasTray

   def _getDeviceCanEject(self):
      """
      Property target used to get the device-can-eject flag.
      """
      return self._deviceCanEject

   def _getRefreshMediaDelay(self):
      """
      Property target used to get the configured refresh media delay, in seconds.
      """
      return self._refreshMediaDelay

   def _getEjectDelay(self):
      """
      Property target used to get the configured eject delay, in seconds.
      """
      return self._ejectDelay

   device = property(_getDevice, None, None, doc="Filesystem device name for this writer.")
   scsiId = property(_getScsiId, None, None, doc="SCSI id for the device, in the form C{[<method>:]scsibus,target,lun}.")
   hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer, either SCSI id or device path.")
   driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.")
   media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.")
   deviceType = property(_getDeviceType, None, None, doc="Type of the device, as returned from C{cdrecord -prcap}.")
   deviceVendor = property(_getDeviceVendor, None, None, doc="Vendor of the device, as returned from C{cdrecord -prcap}.")
   deviceId = property(_getDeviceId, None, None, doc="Device identification, as returned from C{cdrecord -prcap}.")
   deviceBufferSize = property(_getDeviceBufferSize, None, None, doc="Size of the device's write buffer, in bytes.")
   deviceSupportsMulti = property(_getDeviceSupportsMulti, None, None, doc="Indicates whether device supports multisession discs.")
   deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.")
   deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.")
   refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.")
   ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.")

   #################################################
   # Methods related to device and media attributes
   #################################################
   def isRewritable(self):
      """Indicates whether the media is rewritable per configuration."""
      return self._media.rewritable

   def _retrieveProperties(self):
      """
      Retrieves properties for a device from C{cdrecord}.

      The results are returned as a tuple of the object device attributes as
      returned from L{_parsePropertiesOutput}: C{(deviceType, deviceVendor,
      deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray,
      deviceCanEject)}.

      @return: Results tuple as described above.
      @raise IOError: If there is a problem talking to the device.
      """
      args = CdWriter._buildPropertiesArgs(self.hardwareId)
      command = resolveCommand(CDRECORD_COMMAND)
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      if result != 0:
         raise IOError("Error (%d) executing cdrecord command to get properties." % result)
      return CdWriter._parsePropertiesOutput(output)

   def retrieveCapacity(self, entireDisc=False, useMulti=True):
      """
      Retrieves capacity for the current media in terms of a C{MediaCapacity}
      object.

      If C{entireDisc} is passed in as C{True} the capacity will be for the
      entire disc, as if it were to be rewritten from scratch.  If the drive
      does not support writing multisession discs or if C{useMulti} is passed
      in as C{False}, the capacity will also be as if the disc were to be
      rewritten from scratch, but the indicated boundaries value will be
      C{None}.  The same will happen if the disc cannot be read for some
      reason.  Otherwise, the capacity (including the boundaries) will
      represent whatever space remains on the disc to be filled by future
      sessions.

      @param entireDisc: Indicates whether to return capacity for entire disc.
      @type entireDisc: Boolean true/false

      @param useMulti: Indicates whether a multisession disc should be assumed, if possible.
      @type useMulti: Boolean true/false

      @return: C{MediaCapacity} object describing the capacity of the media.
      @raise IOError: If the media could not be read for some reason.
      """
      boundaries = self._getBoundaries(entireDisc, useMulti)
      return CdWriter._calculateCapacity(self._media, boundaries)

   def _getBoundaries(self, entireDisc=False, useMulti=True):
      """
      Gets the ISO boundaries for the media.

      If C{entireDisc} is passed in as C{True} the boundaries will be C{None},
      as if the disc were to be rewritten from scratch.  If the drive does not
      support writing multisession discs, the returned value will be C{None}.
      The same will happen if the disc can't be read for some reason.
      Otherwise, the returned value will represent the boundaries of the
      disc's current contents.

      The results are returned as a tuple of (lower, upper) as needed by the
      C{IsoImage} class.  Note that these values are in terms of ISO sectors,
      not bytes.  Clients should generally consider the boundaries value
      opaque, however.

      @param entireDisc: Indicates whether to return capacity for entire disc.
      @type entireDisc: Boolean true/false

      @param useMulti: Indicates whether a multisession disc should be assumed, if possible.
      @type useMulti: Boolean true/false

      @return: Boundaries tuple or C{None}, as described above.
      @raise IOError: If the media could not be read for some reason.
      """
      if not self._deviceSupportsMulti:
         logger.debug("Device does not support multisession discs; returning boundaries None.")
         return None
      elif not useMulti:
         logger.debug("Use multisession flag is False; returning boundaries None.")
         return None
      elif entireDisc:
         logger.debug("Entire disc flag is True; returning boundaries None.")
         return None
      else:
         args = CdWriter._buildBoundariesArgs(self.hardwareId)
         command = resolveCommand(CDRECORD_COMMAND)
         (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
         if result != 0:
            logger.debug("Error (%d) executing cdrecord command to get capacity." % result)
            logger.warn("Unable to read disc (might not be initialized); returning boundaries of None.")
            return None
         boundaries = CdWriter._parseBoundariesOutput(output)
         if boundaries is None:
            logger.debug("Returning disc boundaries: None")
         else:
            logger.debug("Returning disc boundaries: (%d, %d)" % (boundaries[0], boundaries[1]))
         return boundaries

   @staticmethod
   def _calculateCapacity(media, boundaries):
      """
      Calculates capacity for the media in terms of boundaries.

      If C{boundaries} is C{None} or the upper bound is 0 (zero), then the
      capacity will be for the entire disc minus the initial lead-in.
      Otherwise, capacity will be as if the caller wanted to add an additional
      session to the end of the existing data on the disc.

      @param media: MediaDefinition object describing the media capacity.
      @param boundaries: Session boundaries as returned from L{_getBoundaries}.

      @return: C{MediaCapacity} object describing the capacity of the media.
      """
      if boundaries is None or boundaries[1] == 0:
         logger.debug("Capacity calculations are based on a complete disc rewrite.")
         sectorsAvailable = media.capacity - media.initialLeadIn
         if sectorsAvailable < 0:
            sectorsAvailable = 0
         bytesUsed = 0
         bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES)
      else:
         logger.debug("Capacity calculations are based on a new ISO session.")
         sectorsAvailable = media.capacity - boundaries[1] - media.leadIn
         if sectorsAvailable < 0:
            sectorsAvailable = 0
         bytesUsed = convertSize(boundaries[1], UNIT_SECTORS, UNIT_BYTES)
         bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES)
      logger.debug("Used [%s], available [%s]." % (displayBytes(bytesUsed), displayBytes(bytesAvailable)))
      return MediaCapacity(bytesUsed, bytesAvailable, boundaries)
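The arithmetic in _calculateCapacity can be checked independently: a full rewrite subtracts the 11400-sector initial lead-in from the raw capacity, while appending a session subtracts the existing upper boundary plus the 6900-sector session lead-in. A minimal sketch of that calculation, assuming 2048-byte ISO data sectors (the function name is illustrative, not Cedar Backup's):

```python
# Illustrative sketch of the capacity arithmetic; assumes 2048-byte sectors.
SECTOR_BYTES = 2048
INITIAL_LEAD_IN = 11400.0   # sectors, per cdrecord's documentation
LEAD_IN = 6900.0            # sectors, for each successive session

def calculate_capacity(capacity_sectors, boundaries):
    """Return (bytes_used, bytes_available) for the media.

    boundaries is None (or has an upper bound of 0) for a full rewrite;
    otherwise it is the (lower, upper) tuple from 'cdrecord -msinfo'.
    """
    if boundaries is None or boundaries[1] == 0:
        # Full rewrite: everything minus the initial lead-in is free.
        sectors_available = max(capacity_sectors - INITIAL_LEAD_IN, 0)
        bytes_used = 0.0
    else:
        # New session: subtract existing data plus the session lead-in.
        sectors_available = max(capacity_sectors - boundaries[1] - LEAD_IN, 0)
        bytes_used = boundaries[1] * SECTOR_BYTES
    return bytes_used, sectors_available * SECTOR_BYTES
```

For example, on empty 74-minute media (332800 sectors), a full rewrite leaves (332800 - 11400) * 2048 bytes available.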
   #######################################################
   # Methods used for working with the internal ISO image
   #######################################################

   def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
      """
      Initializes the writer's associated ISO image.

      This method initializes the C{image} instance variable so that the caller
      can use the C{addImageEntry} method.  Once entries have been added, the
      C{writeImage} method can be called with no arguments.

      @param newDisc: Indicates whether the disc should be re-initialized
      @type newDisc: Boolean true/false.

      @param tmpdir: Temporary directory to use if needed
      @type tmpdir: String representing a directory path on disk

      @param mediaLabel: Media label to be applied to the image, if any
      @type mediaLabel: String, no more than 25 characters long
      """
      self._image = _ImageProperties()
      self._image.newDisc = newDisc
      self._image.tmpdir = encodePath(tmpdir)
      self._image.mediaLabel = mediaLabel
      self._image.entries = {}  # mapping from path to graft point (if any)

   def addImageEntry(self, path, graftPoint):
      """
      Adds a filepath entry to the writer's associated ISO image.

      The contents of the filepath -- but not the path itself -- will be added
      to the image at the indicated graft point.  If you don't want to use a
      graft point, just pass C{None}.

      @note: Before calling this method, you must call L{initializeImage}.

      @param path: File or directory to be added to the image
      @type path: String representing a path on disk

      @param graftPoint: Graft point to be used when adding this entry
      @type graftPoint: String representing a graft point path, as described above

      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      if not os.path.exists(path):
         raise ValueError("Path [%s] does not exist." % path)
      self._image.entries[path] = graftPoint

   def setImageNewDisc(self, newDisc):
      """
      Resets (overrides) the newDisc flag on the internal image.
      @param newDisc: New disc flag to set
      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      self._image.newDisc = newDisc

   def getEstimatedImageSize(self):
      """
      Gets the estimated size of the image associated with the writer.
      @return: Estimated size of the image, in bytes.
      @raise IOError: If there is a problem calling C{mkisofs}.
      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      image = IsoImage()
      for path in self._image.entries.keys():
         image.addEntry(path, self._image.entries[path], override=False, contentsOnly=True)
      return image.getEstimatedSize()
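The image-related methods above share one bookkeeping rule: initializeImage() creates an entries dict mapping each path to an optional graft point, and addImageEntry() refuses to run until that has happened. A toy stand-in showing just that protocol (class and method names are illustrative, not Cedar Backup's API):

```python
import os

class ImageEntries(object):
    """Toy stand-in for the writer's internal entry bookkeeping."""

    def __init__(self):
        self.entries = None  # not usable until initialize() is called

    def initialize(self):
        self.entries = {}    # path -> graft point (or None)

    def add(self, path, graft_point=None):
        """Record a path for the image, like addImageEntry() above."""
        if self.entries is None:
            raise ValueError("Must call initialize() first.")
        if not os.path.exists(path):
            raise ValueError("Path [%s] does not exist." % path)
        self.entries[path] = graft_point
```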
   ######################################
   # Methods which expose device actions
   ######################################
   def openTray(self):
      """
      Opens the device's tray and leaves it open.

      This only works if the device has a tray and supports ejecting its media.
      We have no way to know if the tray is currently open or closed, so we
      just send the appropriate command and hope for the best.  If the device
      does not have a tray or does not support ejecting its media, then we do
      nothing.

      If the writer was constructed with C{noEject=True}, then this is a no-op.

      Starting with Debian wheezy on my backup hardware, I started seeing
      consistent problems with the eject command.  I couldn't tell whether
      these problems were due to the device management system or to the new
      kernel (3.2.0).  Initially, I saw simple eject failures, possibly because
      I was opening and closing the tray too quickly.  I worked around that
      behavior with the new ejectDelay flag.

      Later, I sometimes ran into issues after writing an image to a disc:
      eject would give errors like "unable to eject, last error: Inappropriate
      ioctl for device".  Various sources online (like Ubuntu bug #875543)
      suggested that the drive was being locked somehow, and that the
      workaround was to run 'eject -i off' to unlock it.  Sure enough, that
      fixed the problem for me, so now it's a normal error-handling strategy.

      @raise IOError: If there is an error talking to the device.
      """
      if not self._noEject:
         if self._deviceHasTray and self._deviceCanEject:
            args = CdWriter._buildOpenTrayArgs(self._device)
            command = resolveCommand(EJECT_COMMAND)
            result = executeCommand(command, args)[0]
            if result != 0:
               logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.")
               self.unlockTray()
               result = executeCommand(command, args)[0]
               if result != 0:
                  raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result)
               logger.debug("Kludge was apparently successful.")
            if self.ejectDelay is not None:
               logger.debug("Per configuration, sleeping %d seconds after opening tray." % self.ejectDelay)
               time.sleep(self.ejectDelay)

   def unlockTray(self):
      """
      Unlocks the device's tray.
      @raise IOError: If there is an error talking to the device.
      """
      args = CdWriter._buildUnlockTrayArgs(self._device)
      command = resolveCommand(EJECT_COMMAND)
      result = executeCommand(command, args)[0]
      if result != 0:
         raise IOError("Error (%d) executing eject command to unlock tray." % result)

   def closeTray(self):
      """
      Closes the device's tray.

      This only works if the device has a tray and supports ejecting its media.
      We have no way to know if the tray is currently open or closed, so we
      just send the appropriate command and hope for the best.  If the device
      does not have a tray or does not support ejecting its media, then we do
      nothing.

      If the writer was constructed with C{noEject=True}, then this is a no-op.

      @raise IOError: If there is an error talking to the device.
      """
      if not self._noEject:
         if self._deviceHasTray and self._deviceCanEject:
            args = CdWriter._buildCloseTrayArgs(self._device)
            command = resolveCommand(EJECT_COMMAND)
            result = executeCommand(command, args)[0]
            if result != 0:
               raise IOError("Error (%d) executing eject command to close tray." % result)

   def refreshMedia(self):
      """
      Opens and then immediately closes the device's tray, to refresh the
      device's idea of the media.

      Sometimes, a device gets confused about the state of its media.  Often,
      all it takes to solve the problem is to eject the media and then
      immediately reload it.  (There are also configurable eject and refresh
      media delays which can be applied, for situations where this makes a
      difference.)

      This only works if the device has a tray and supports ejecting its media.
      We have no way to know if the tray is currently open or closed, so we
      just send the appropriate command and hope for the best.  If the device
      does not have a tray or does not support ejecting its media, then we do
      nothing.  The configured delays still apply, though.

      @raise IOError: If there is an error talking to the device.
      """
      self.openTray()
      self.closeTray()
      self.unlockTray()  # on some systems, writing a disc leaves the tray locked, yikes!
      if self.refreshMediaDelay is not None:
         logger.debug("Per configuration, sleeping %d seconds to stabilize media state." % self.refreshMediaDelay)
         time.sleep(self.refreshMediaDelay)
      logger.debug("Media refresh complete; hopefully media state is stable now.")
   def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
      """
      Writes an ISO image to the media in the device.

      If C{newDisc} is passed in as C{True}, we assume that the entire disc
      will be overwritten, and the media will be blanked before writing it if
      possible (i.e. if the media is rewritable).

      If C{writeMulti} is passed in as C{True}, then a multisession disc will
      be written if possible (i.e. if the drive supports writing multisession
      discs).

      If C{imagePath} is passed in as C{None}, then the existing image
      configured with C{initializeImage} will be used.  Under these
      circumstances, the passed-in C{newDisc} flag will be ignored.

      By default, we assume that the disc can be written multisession and that
      we should append to the current contents of the disc.  In any case, the
      ISO image must be generated appropriately (i.e. must take into account
      any existing session boundaries, etc.)

      @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image
      @type imagePath: String representing a path on disk

      @param newDisc: Indicates whether the entire disc will be overwritten.
      @type newDisc: Boolean true/false.

      @param writeMulti: Indicates whether a multisession disc should be written, if possible.
      @type writeMulti: Boolean true/false

      @raise ValueError: If the image path is not absolute.
      @raise ValueError: If some path cannot be encoded properly.
      @raise IOError: If the media could not be written to for some reason.
      @raise ValueError: If no image is passed in and initializeImage() was not previously called
      """
      if imagePath is None:
         if self._image is None:
            raise ValueError("Must call initializeImage() before using this method with no image path.")
         try:
            imagePath = self._createImage()
            self._writeImage(imagePath, writeMulti, self._image.newDisc)
         finally:
            if imagePath is not None and os.path.exists(imagePath):
               try:
                  os.unlink(imagePath)
               except:
                  pass
      else:
         imagePath = encodePath(imagePath)
         if not os.path.isabs(imagePath):
            raise ValueError("Image path must be absolute.")
         self._writeImage(imagePath, writeMulti, newDisc)
def _createImage(self):
    """
    Creates an ISO image based on configuration in self._image.
    @return: Path to the newly-created ISO image on disk.
    @raise IOError: If there is an error writing the image to disk.
    @raise ValueError: If there are no filesystem entries in the image.
    @raise ValueError: If a path cannot be encoded properly.
    """
    path = None
    capacity = self.retrieveCapacity(entireDisc=self._image.newDisc)
    image = IsoImage(self.device, capacity.boundaries)
    image.volumeId = self._image.mediaLabel  # may be None, which is also valid
    for key in self._image.entries.keys():
        image.addEntry(key, self._image.entries[key], override=False, contentsOnly=True)
    size = image.getEstimatedSize()
    logger.info("Image size will be %s." % displayBytes(size))
    available = capacity.bytesAvailable
    logger.debug("Media capacity: %s" % displayBytes(available))
    if size > available:
        logger.error("Image [%s] does not fit in available capacity [%s]." % (displayBytes(size), displayBytes(available)))
        raise IOError("Media does not contain enough capacity to store image.")
    try:
        (handle, path) = tempfile.mkstemp(dir=self._image.tmpdir)
        try: os.close(handle)
        except: pass
        image.writeImage(path)
        logger.debug("Completed creating image [%s]." % path)
        return path
    except Exception, e:
        if path is not None and os.path.exists(path):
            try: os.unlink(path)
            except: pass
        raise e

def _writeImage(self, imagePath, writeMulti, newDisc):
    """
    Writes an ISO image to disc using cdrecord.
    The disc is blanked first if C{newDisc} is C{True}.
    @param imagePath: Path to an ISO image on disk.
    @param writeMulti: Indicates whether a multisession disc should be written, if possible.
    @param newDisc: Indicates whether the entire disc will be overwritten.
    """
    if newDisc:
        self._blankMedia()
    args = CdWriter._buildWriteArgs(self.hardwareId, imagePath, self._driveSpeed, writeMulti and self._deviceSupportsMulti)
    command = resolveCommand(CDRECORD_COMMAND)
    result = executeCommand(command, args)[0]
    if result != 0:
        raise IOError("Error (%d) executing command to write disc." % result)
    self.refreshMedia()

def _blankMedia(self):
    """
    Blanks the media in the device, if the media is rewritable.
    @raise IOError: If the media could not be written to for some reason.
    """
    if self.isRewritable():
        args = CdWriter._buildBlankArgs(self.hardwareId)
        command = resolveCommand(CDRECORD_COMMAND)
        result = executeCommand(command, args)[0]
        if result != 0:
            raise IOError("Error (%d) executing command to blank disc." % result)
        self.refreshMedia()

#######################################
# Methods used to parse command output
#######################################

@staticmethod
def _parsePropertiesOutput(output):
    """
    Parses the output from a C{cdrecord} properties command.

    The C{output} parameter should be a list of strings as returned from
    C{executeCommand} for a C{cdrecord} command with arguments as from
    C{_buildPropertiesArgs}. The list of strings will be parsed to yield
    information about the properties of the device.

    The output is expected to be a long list of strings. Unfortunately,
    the strings aren't in a completely regular format. However, the format
    of individual lines seems to be regular enough that we can look for
    specific values. Two kinds of parsing take place: one kind of parsing
    picks out specific values like the device id, device vendor, etc.
    The other kind of parsing just sets a boolean flag C{True} if a matching
    line is found. All of the parsing is done with regular expressions.

    Right now, pretty much nothing in the output is required and we should
    parse an empty document successfully (albeit resulting in a device that
    can't eject, doesn't have a tray and doesn't support multisession
    discs). I had briefly considered erroring out if certain lines weren't
    found or couldn't be parsed, but that seems like a bad idea given that
    most of the information is just for reference.

    The results are returned as a tuple of the object device attributes:
    C{(deviceType, deviceVendor, deviceId, deviceBufferSize,
    deviceSupportsMulti, deviceHasTray, deviceCanEject)}.

    @param output: Output from a C{cdrecord -prcap} command.

    @return: Results tuple as described above.
    @raise IOError: If there is a problem parsing the output.
    """
    deviceType = None
    deviceVendor = None
    deviceId = None
    deviceBufferSize = None
    deviceSupportsMulti = False
    deviceHasTray = False
    deviceCanEject = False
    typePattern = re.compile(r"(^Device type\s*:\s*)(.*)(\s*)(.*$)")
    vendorPattern = re.compile(r"(^Vendor_info\s*:\s*'\s*)(.*?)(\s*')(.*$)")
    idPattern = re.compile(r"(^Identifikation\s*:\s*'\s*)(.*?)(\s*')(.*$)")
    bufferPattern = re.compile(r"(^\s*Buffer size in KB:\s*)(.*?)(\s*$)")
    multiPattern = re.compile(r"^\s*Does read multi-session.*$")
    trayPattern = re.compile(r"^\s*Loading mechanism type: tray.*$")
    ejectPattern = re.compile(r"^\s*Does support ejection.*$")
    for line in output:
        if typePattern.search(line):
            deviceType = typePattern.search(line).group(2)
            logger.info("Device type is [%s]." % deviceType)
        elif vendorPattern.search(line):
            deviceVendor = vendorPattern.search(line).group(2)
            logger.info("Device vendor is [%s]." % deviceVendor)
        elif idPattern.search(line):
            deviceId = idPattern.search(line).group(2)
            logger.info("Device id is [%s]." % deviceId)
        elif bufferPattern.search(line):
            try:
                sectors = int(bufferPattern.search(line).group(2))
                deviceBufferSize = convertSize(sectors, UNIT_KBYTES, UNIT_BYTES)
                logger.info("Device buffer size is [%d] bytes." % deviceBufferSize)
            except (TypeError, ValueError): pass  # int() raises ValueError on non-numeric input
        elif multiPattern.search(line):
            deviceSupportsMulti = True
            logger.info("Device does support multisession discs.")
        elif trayPattern.search(line):
            deviceHasTray = True
            logger.info("Device has a tray.")
        elif ejectPattern.search(line):
            deviceCanEject = True
            logger.info("Device can eject its media.")
    return (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)

@staticmethod
def _parseBoundariesOutput(output):
    """
    Parses the output from a C{cdrecord} capacity command.

    The C{output} parameter should be a list of strings as returned from
    C{executeCommand} for a C{cdrecord} command with arguments as from
    C{_buildBoundariesArgs}. The list of strings will be parsed to yield
    information about the capacity of the media in the device.

    Basically, we expect the list of strings to include just one line, a pair
    of values. There isn't supposed to be whitespace, but we allow it anyway
    in the regular expression. Any lines below the one line we parse are
    completely ignored. It would be a good idea to ignore C{stderr} when
    executing the C{cdrecord} command that generates output for this method,
    because sometimes C{cdrecord} spits out kernel warnings about the actual
    output.

    The results are returned as a tuple of (lower, upper) as needed by the
    C{IsoImage} class. Note that these values are in terms of ISO sectors,
    not bytes. Clients should generally consider the boundaries value
    opaque, however.

    @note: If the boundaries output can't be parsed, we return C{None}.

    @param output: Output from a C{cdrecord -msinfo} command.

    @return: Boundaries tuple as described above.
    @raise IOError: If there is a problem parsing the output.
    """
    if len(output) < 1:
        logger.warn("Unable to read disc (might not be initialized); returning full capacity.")
        return None
    boundaryPattern = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)")
    parsed = boundaryPattern.search(output[0])
    if not parsed:
        raise IOError("Unable to parse output of boundaries command.")
    try:
        boundaries = (int(parsed.group(2)), int(parsed.group(4)))
    except (TypeError, ValueError):  # int() raises ValueError when a group is empty
        raise IOError("Unable to parse output of boundaries command.")
    return boundaries

#################################
# Methods used to build commands
#################################

@staticmethod
def _buildOpenTrayArgs(device):
    """
    Builds a list of arguments to be passed to an C{eject} command.

    The arguments will cause the C{eject} command to open the tray and
    eject the media. No validation is done by this method as to whether
    this action actually makes sense.

    @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}.

    @return: List suitable for passing to L{util.executeCommand} as C{args}.
    """
    args = []
    args.append(device)
    return args

@staticmethod
def _buildUnlockTrayArgs(device):
    """
    Builds a list of arguments to be passed to an C{eject} command.

    The arguments will cause the C{eject} command to unlock the tray.

    @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}.

    @return: List suitable for passing to L{util.executeCommand} as C{args}.
    """
    args = []
    args.append("-i")
    args.append("off")
    args.append(device)
    return args

@staticmethod
def _buildCloseTrayArgs(device):
    """
    Builds a list of arguments to be passed to an C{eject} command.

    The arguments will cause the C{eject} command to close the tray and reload
    the media. No validation is done by this method as to whether this
    action actually makes sense.

    @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}.

    @return: List suitable for passing to L{util.executeCommand} as C{args}.
    """
    args = []
    args.append("-t")
    args.append(device)
    return args

@staticmethod
def _buildPropertiesArgs(hardwareId):
    """
    Builds a list of arguments to be passed to a C{cdrecord} command.

    The arguments will cause the C{cdrecord} command to ask the device
    for a list of its capabilities via the C{-prcap} switch.

    @param hardwareId: Hardware id for the device (either SCSI id or device path)

    @return: List suitable for passing to L{util.executeCommand} as C{args}.
    """
    args = []
    args.append("-prcap")
    args.append("dev=%s" % hardwareId)
    return args

@staticmethod
def _buildBoundariesArgs(hardwareId):
    """
    Builds a list of arguments to be passed to a C{cdrecord} command.

    The arguments will cause the C{cdrecord} command to ask the device for
    the current multisession boundaries of the media using the C{-msinfo}
    switch.

    @param hardwareId: Hardware id for the device (either SCSI id or device path)

    @return: List suitable for passing to L{util.executeCommand} as C{args}.
    """
    args = []
    args.append("-msinfo")
    args.append("dev=%s" % hardwareId)
    return args

@staticmethod
def _buildBlankArgs(hardwareId, driveSpeed=None):
    """
    Builds a list of arguments to be passed to a C{cdrecord} command.

    The arguments will cause the C{cdrecord} command to blank the media in
    the device identified by C{hardwareId}. No validation is done by this
    method as to whether the action makes sense (i.e. as to whether the
    media even can be blanked).

    @param hardwareId: Hardware id for the device (either SCSI id or device path)
    @param driveSpeed: Speed at which the drive writes.

    @return: List suitable for passing to L{util.executeCommand} as C{args}.
    """
    args = []
    args.append("-v")
    args.append("blank=fast")
    if driveSpeed is not None:
        args.append("speed=%d" % driveSpeed)
    args.append("dev=%s" % hardwareId)
    return args

@staticmethod
def _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True):
    """
    Builds a list of arguments to be passed to a C{cdrecord} command.

    The arguments will cause the C{cdrecord} command to write the indicated
    ISO image (C{imagePath}) to the media in the device identified by
    C{hardwareId}. The C{writeMulti} argument controls whether to write a
    multisession disc. No validation is done by this method as to whether
    the action makes sense (i.e. as to whether the device even can write
    multisession discs, for instance).

    @param hardwareId: Hardware id for the device (either SCSI id or device path)
    @param imagePath: Path to an ISO image on disk.
    @param driveSpeed: Speed at which the drive writes.
    @param writeMulti: Indicates whether to write a multisession disc.

    @return: List suitable for passing to L{util.executeCommand} as C{args}.
    """
    args = []
    args.append("-v")
    if driveSpeed is not None:
        args.append("speed=%d" % driveSpeed)
    args.append("dev=%s" % hardwareId)
    if writeMulti:
        args.append("-multi")
    args.append("-data")
    args.append(imagePath)
    return args
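Because the builder methods are static and side-effect free, the command line they produce is easy to inspect. A minimal sketch that mirrors the `_buildWriteArgs` logic above (the hardware id and image path here are invented examples, and this standalone function is not part of the library):

```python
def buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True):
    # Mirrors _buildWriteArgs: cdrecord arguments for writing an ISO image.
    args = ["-v"]
    if driveSpeed is not None:
        args.append("speed=%d" % driveSpeed)
    args.append("dev=%s" % hardwareId)
    if writeMulti:
        args.append("-multi")  # leave the disc open for further sessions
    args.append("-data")
    args.append(imagePath)
    return args

print(buildWriteArgs("/dev/cdrw", "/tmp/image.iso", driveSpeed=4))
# ['-v', 'speed=4', 'dev=/dev/cdrw', '-multi', '-data', '/tmp/image.iso']
```

Dropping `writeMulti` simply omits the `-multi` flag, which is how the caller disables multisession writes for devices that do not support them.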

CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.AbsolutePathList-class.html

    Class AbsolutePathList


    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    AbsolutePathList
    

    Class representing a list of absolute paths.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is an absolute path.

    Each item added to the list is encoded using encodePath. If we don't do this, we have problems trying certain operations between strings and unicode objects, particularly for "odd" filenames that can't be encoded in standard ASCII.

Instance Methods
     
    append(self, item)
    Overrides the standard append method.
     
    insert(self, index, item)
    Overrides the standard insert method.
     
    extend(self, seq)
Overrides the standard extend method.

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Class Variables

    Inherited from list: __hash__

Properties

    Inherited from object: __class__

Method Details

    append(self, item)


    Overrides the standard append method.

    Raises:
    • ValueError - If item is not an absolute path.
    Overrides: list.append

    insert(self, index, item)


    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not an absolute path.
    Overrides: list.insert

    extend(self, seq)


Overrides the standard extend method.

    Raises:
    • ValueError - If any item is not an absolute path.
    Overrides: list.extend

CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.rebuild-module.html

    Module rebuild


    Implements the standard 'rebuild' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
     
    executeRebuild(configPath, options, config)
    Executes the rebuild backup action.
     
    _findRebuildDirs(config)
    Finds the set of directories to be included in a disc rebuild.
Variables
      logger = logging.getLogger("CedarBackup2.log.actions.rebuild")
      __package__ = 'CedarBackup2.actions'
Function Details

    executeRebuild(configPath, options, config)


    Executes the rebuild backup action.

    This function exists mainly to recreate a disc that has been "trashed" due to media or hardware problems. Note that the "stage complete" indicator isn't checked for this action.

    Note that the rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.

    _findRebuildDirs(config)


    Finds the set of directories to be included in a disc rebuild.

The rebuild action is supposed to recreate "last week's" disc. This won't always be possible if some of the staging directories are missing. However, the general procedure is to look back into the past no further than the previous "starting day of week", and then work forward from there, trying to find all of the staging directories between then and now that still exist and have a stage indicator.

    Parameters:
    • config - Config object.
    Returns:
    Correct staging dir, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If we do not find at least one staging directory.

CedarBackup2-2.22.0/doc/interface/toc-everything.html

    Everything


    All Classes

    CedarBackup2.cli.Options
    CedarBackup2.config.ActionDependencies
    CedarBackup2.config.ActionHook
    CedarBackup2.config.BlankBehavior
    CedarBackup2.config.ByteQuantity
    CedarBackup2.config.CollectConfig
    CedarBackup2.config.CollectDir
    CedarBackup2.config.CollectFile
    CedarBackup2.config.CommandOverride
    CedarBackup2.config.Config
    CedarBackup2.config.ExtendedAction
    CedarBackup2.config.ExtensionsConfig
    CedarBackup2.config.LocalPeer
    CedarBackup2.config.OptionsConfig
    CedarBackup2.config.PeersConfig
    CedarBackup2.config.PostActionHook
    CedarBackup2.config.PreActionHook
    CedarBackup2.config.PurgeConfig
    CedarBackup2.config.PurgeDir
    CedarBackup2.config.ReferenceConfig
    CedarBackup2.config.RemotePeer
    CedarBackup2.config.StageConfig
    CedarBackup2.config.StoreConfig
    CedarBackup2.extend.capacity.CapacityConfig
    CedarBackup2.extend.capacity.LocalConfig
    CedarBackup2.extend.capacity.PercentageQuantity
    CedarBackup2.extend.encrypt.EncryptConfig
    CedarBackup2.extend.encrypt.LocalConfig
    CedarBackup2.extend.mbox.LocalConfig
    CedarBackup2.extend.mbox.MboxConfig
    CedarBackup2.extend.mbox.MboxDir
    CedarBackup2.extend.mbox.MboxFile
    CedarBackup2.extend.mysql.LocalConfig
    CedarBackup2.extend.mysql.MysqlConfig
    CedarBackup2.extend.postgresql.LocalConfig
    CedarBackup2.extend.postgresql.PostgresqlConfig
    CedarBackup2.extend.split.LocalConfig
    CedarBackup2.extend.split.SplitConfig
    CedarBackup2.extend.subversion.BDBRepository
    CedarBackup2.extend.subversion.FSFSRepository
    CedarBackup2.extend.subversion.LocalConfig
    CedarBackup2.extend.subversion.Repository
    CedarBackup2.extend.subversion.RepositoryDir
    CedarBackup2.extend.subversion.SubversionConfig
    CedarBackup2.filesystem.BackupFileList
    CedarBackup2.filesystem.FilesystemList
    CedarBackup2.filesystem.PurgeItemList
    CedarBackup2.filesystem.SpanItem
    CedarBackup2.peer.LocalPeer
    CedarBackup2.peer.RemotePeer
    CedarBackup2.tools.span.SpanOptions
    CedarBackup2.util.AbsolutePathList
    CedarBackup2.util.Diagnostics
    CedarBackup2.util.DirectedGraph
    CedarBackup2.util.ObjectTypeList
    CedarBackup2.util.PathResolverSingleton
    CedarBackup2.util.Pipe
    CedarBackup2.util.RegexList
    CedarBackup2.util.RegexMatchList
    CedarBackup2.util.RestrictedContentList
    CedarBackup2.util.UnorderedList
    CedarBackup2.writers.cdwriter.CdWriter
    CedarBackup2.writers.cdwriter.MediaCapacity
    CedarBackup2.writers.cdwriter.MediaDefinition
    CedarBackup2.writers.dvdwriter.DvdWriter
    CedarBackup2.writers.dvdwriter.MediaCapacity
    CedarBackup2.writers.dvdwriter.MediaDefinition
    CedarBackup2.writers.util.IsoImage
    CedarBackup2.xmlutil.Serializer

    All Functions

    CedarBackup2.actions.collect.executeCollect
    CedarBackup2.actions.initialize.executeInitialize
    CedarBackup2.actions.purge.executePurge
    CedarBackup2.actions.rebuild.executeRebuild
    CedarBackup2.actions.stage.executeStage
    CedarBackup2.actions.store.consistencyCheck
    CedarBackup2.actions.store.executeStore
    CedarBackup2.actions.store.writeImage
    CedarBackup2.actions.store.writeImageBlankSafe
    CedarBackup2.actions.store.writeStoreIndicator
    CedarBackup2.actions.util.buildMediaLabel
    CedarBackup2.actions.util.checkMediaState
    CedarBackup2.actions.util.createWriter
    CedarBackup2.actions.util.findDailyDirs
    CedarBackup2.actions.util.getBackupFiles
    CedarBackup2.actions.util.initializeMediaState
    CedarBackup2.actions.util.writeIndicatorFile
    CedarBackup2.actions.validate.executeValidate
    CedarBackup2.cli.cli
    CedarBackup2.cli.setupLogging
    CedarBackup2.cli.setupPathResolver
    CedarBackup2.config.addByteQuantityNode
    CedarBackup2.config.readByteQuantity
    CedarBackup2.customize.customizeOverrides
    CedarBackup2.extend.capacity.executeAction
    CedarBackup2.extend.encrypt.executeAction
    CedarBackup2.extend.mbox.executeAction
    CedarBackup2.extend.mysql.backupDatabase
    CedarBackup2.extend.mysql.executeAction
    CedarBackup2.extend.postgresql.backupDatabase
    CedarBackup2.extend.postgresql.executeAction
    CedarBackup2.extend.split.executeAction
    CedarBackup2.extend.subversion.backupBDBRepository
    CedarBackup2.extend.subversion.backupFSFSRepository
    CedarBackup2.extend.subversion.backupRepository
    CedarBackup2.extend.subversion.executeAction
    CedarBackup2.extend.subversion.getYoungestRevision
    CedarBackup2.extend.sysinfo.executeAction
    CedarBackup2.filesystem.compareContents
    CedarBackup2.filesystem.compareDigestMaps
    CedarBackup2.filesystem.normalizeDir
    CedarBackup2.knapsack.alternateFit
    CedarBackup2.knapsack.bestFit
    CedarBackup2.knapsack.firstFit
    CedarBackup2.knapsack.worstFit
    CedarBackup2.testutil.availableLocales
    CedarBackup2.testutil.buildPath
    CedarBackup2.testutil.captureOutput
    CedarBackup2.testutil.changeFileAge
    CedarBackup2.testutil.commandAvailable
    CedarBackup2.testutil.extractTar
    CedarBackup2.testutil.failUnlessAssignRaises
    CedarBackup2.testutil.findResources
    CedarBackup2.testutil.getLogin
    CedarBackup2.testutil.getMaskAsMode
    CedarBackup2.testutil.hexFloatLiteralAllowed
    CedarBackup2.testutil.platformCygwin
    CedarBackup2.testutil.platformDebian
    CedarBackup2.testutil.platformHasEcho
    CedarBackup2.testutil.platformMacOsX
    CedarBackup2.testutil.platformRequiresBinaryRead
    CedarBackup2.testutil.platformSupportsLinks
    CedarBackup2.testutil.platformSupportsPermissions
    CedarBackup2.testutil.platformWindows
    CedarBackup2.testutil.randomFilename
    CedarBackup2.testutil.removedir
    CedarBackup2.testutil.runningAsRoot
    CedarBackup2.testutil.setupDebugLogger
    CedarBackup2.testutil.setupOverrides
    CedarBackup2.tools.span.cli
    CedarBackup2.util.buildNormalizedPath
    CedarBackup2.util.calculateFileAge
    CedarBackup2.util.changeOwnership
    CedarBackup2.util.checkUnique
    CedarBackup2.util.convertSize
    CedarBackup2.util.dereferenceLink
    CedarBackup2.util.deriveDayOfWeek
    CedarBackup2.util.deviceMounted
    CedarBackup2.util.displayBytes
    CedarBackup2.util.encodePath
    CedarBackup2.util.executeCommand
    CedarBackup2.util.getFunctionReference
    CedarBackup2.util.getUidGid
    CedarBackup2.util.isRunningAsRoot
    CedarBackup2.util.isStartOfWeek
    CedarBackup2.util.mount
    CedarBackup2.util.nullDevice
    CedarBackup2.util.parseCommaSeparatedString
    CedarBackup2.util.removeKeys
    CedarBackup2.util.resolveCommand
    CedarBackup2.util.sanitizeEnvironment
    CedarBackup2.util.sortDict
    CedarBackup2.util.splitCommandLine
    CedarBackup2.util.unmount
    CedarBackup2.writers.util.readMediaLabel
    CedarBackup2.writers.util.validateDevice
    CedarBackup2.writers.util.validateDriveSpeed
    CedarBackup2.writers.util.validateScsiId
    CedarBackup2.xmlutil.addBooleanNode
    CedarBackup2.xmlutil.addContainerNode
    CedarBackup2.xmlutil.addIntegerNode
    CedarBackup2.xmlutil.addStringNode
    CedarBackup2.xmlutil.createInputDom
    CedarBackup2.xmlutil.createOutputDom
    CedarBackup2.xmlutil.isElement
    CedarBackup2.xmlutil.readBoolean
    CedarBackup2.xmlutil.readChildren
    CedarBackup2.xmlutil.readFirstChild
    CedarBackup2.xmlutil.readFloat
    CedarBackup2.xmlutil.readInteger
    CedarBackup2.xmlutil.readString
    CedarBackup2.xmlutil.readStringList
    CedarBackup2.xmlutil.serializeDom

    All Variables

    CedarBackup2.action.__package__
    CedarBackup2.actions.collect.__package__
    CedarBackup2.actions.collect.logger
    CedarBackup2.actions.constants.COLLECT_INDICATOR
    CedarBackup2.actions.constants.DIGEST_EXTENSION
    CedarBackup2.actions.constants.DIR_TIME_FORMAT
    CedarBackup2.actions.constants.INDICATOR_PATTERN
    CedarBackup2.actions.constants.STAGE_INDICATOR
    CedarBackup2.actions.constants.STORE_INDICATOR
    CedarBackup2.actions.constants.__package__
    CedarBackup2.actions.initialize.__package__
    CedarBackup2.actions.initialize.logger
    CedarBackup2.actions.purge.__package__
    CedarBackup2.actions.purge.logger
    CedarBackup2.actions.rebuild.__package__
    CedarBackup2.actions.rebuild.logger
    CedarBackup2.actions.stage.__package__
    CedarBackup2.actions.stage.logger
    CedarBackup2.actions.store.__package__
    CedarBackup2.actions.store.logger
    CedarBackup2.actions.util.MEDIA_LABEL_PREFIX
    CedarBackup2.actions.util.__package__
    CedarBackup2.actions.util.logger
    CedarBackup2.actions.validate.__package__
    CedarBackup2.actions.validate.logger
    CedarBackup2.cli.COLLECT_INDEX
    CedarBackup2.cli.COMBINE_ACTIONS
    CedarBackup2.cli.DATE_FORMAT
    CedarBackup2.cli.DEFAULT_CONFIG
    CedarBackup2.cli.DEFAULT_LOGFILE
    CedarBackup2.cli.DEFAULT_MODE
    CedarBackup2.cli.DEFAULT_OWNERSHIP
    CedarBackup2.cli.DISK_LOG_FORMAT
    CedarBackup2.cli.DISK_OUTPUT_FORMAT
    CedarBackup2.cli.INITIALIZE_INDEX
    CedarBackup2.cli.LONG_SWITCHES
    CedarBackup2.cli.NONCOMBINE_ACTIONS
    CedarBackup2.cli.PURGE_INDEX
    CedarBackup2.cli.REBUILD_INDEX
    CedarBackup2.cli.SCREEN_LOG_FORMAT
    CedarBackup2.cli.SCREEN_LOG_STREAM
    CedarBackup2.cli.SHORT_SWITCHES
    CedarBackup2.cli.STAGE_INDEX
    CedarBackup2.cli.STORE_INDEX
    CedarBackup2.cli.VALIDATE_INDEX
    CedarBackup2.cli.VALID_ACTIONS
    CedarBackup2.cli.__package__
    CedarBackup2.cli.logger
    CedarBackup2.config.ACTION_NAME_REGEX
    CedarBackup2.config.DEFAULT_DEVICE_TYPE
    CedarBackup2.config.DEFAULT_MEDIA_TYPE
    CedarBackup2.config.REWRITABLE_MEDIA_TYPES
    CedarBackup2.config.VALID_ARCHIVE_MODES
    CedarBackup2.config.VALID_BLANK_MODES
    CedarBackup2.config.VALID_BYTE_UNITS
    CedarBackup2.config.VALID_CD_MEDIA_TYPES
    CedarBackup2.config.VALID_COLLECT_MODES
    CedarBackup2.config.VALID_COMPRESS_MODES
    CedarBackup2.config.VALID_DEVICE_TYPES
    CedarBackup2.config.VALID_DVD_MEDIA_TYPES
    CedarBackup2.config.VALID_FAILURE_MODES
    CedarBackup2.config.VALID_MEDIA_TYPES
    CedarBackup2.config.VALID_ORDER_MODES
    CedarBackup2.config.__package__
    CedarBackup2.config.logger
    CedarBackup2.customize.DEBIAN_CDRECORD
    CedarBackup2.customize.DEBIAN_MKISOFS
    CedarBackup2.customize.PLATFORM
    CedarBackup2.customize.__package__
    CedarBackup2.customize.logger
    CedarBackup2.extend.capacity.__package__
    CedarBackup2.extend.capacity.logger
    CedarBackup2.extend.encrypt.ENCRYPT_INDICATOR
    CedarBackup2.extend.encrypt.GPG_COMMAND
    CedarBackup2.extend.encrypt.VALID_ENCRYPT_MODES
    CedarBackup2.extend.encrypt.__package__
    CedarBackup2.extend.encrypt.logger
    CedarBackup2.extend.mbox.GREPMAIL_COMMAND
    CedarBackup2.extend.mbox.REVISION_PATH_EXTENSION
    CedarBackup2.extend.mbox.__package__
    CedarBackup2.extend.mbox.logger
    CedarBackup2.extend.mysql.MYSQLDUMP_COMMAND
    CedarBackup2.extend.mysql.__package__
    CedarBackup2.extend.mysql.logger
    CedarBackup2.extend.postgresql.POSTGRESQLDUMPALL_COMMAND
    CedarBackup2.extend.postgresql.POSTGRESQLDUMP_COMMAND
    CedarBackup2.extend.postgresql.__package__
    CedarBackup2.extend.postgresql.logger
    CedarBackup2.extend.split.SPLIT_COMMAND
    CedarBackup2.extend.split.SPLIT_INDICATOR
    CedarBackup2.extend.split.__package__
    CedarBackup2.extend.split.logger
    CedarBackup2.extend.subversion.REVISION_PATH_EXTENSION
    CedarBackup2.extend.subversion.SVNADMIN_COMMAND
    CedarBackup2.extend.subversion.SVNLOOK_COMMAND
    CedarBackup2.extend.subversion.__package__
    CedarBackup2.extend.subversion.logger
    CedarBackup2.extend.sysinfo.DPKG_COMMAND
    CedarBackup2.extend.sysinfo.DPKG_PATH
    CedarBackup2.extend.sysinfo.FDISK_COMMAND
    CedarBackup2.extend.sysinfo.FDISK_PATH
    CedarBackup2.extend.sysinfo.LS_COMMAND
    CedarBackup2.extend.sysinfo.__package__
    CedarBackup2.extend.sysinfo.logger
    CedarBackup2.filesystem.__package__
    CedarBackup2.filesystem.logger
    CedarBackup2.image.__package__
    CedarBackup2.knapsack.__package__
    CedarBackup2.peer.DEF_CBACK_COMMAND
    CedarBackup2.peer.DEF_COLLECT_INDICATOR
    CedarBackup2.peer.DEF_RCP_COMMAND
    CedarBackup2.peer.DEF_RSH_COMMAND
    CedarBackup2.peer.DEF_STAGE_INDICATOR
    CedarBackup2.peer.SU_COMMAND
    CedarBackup2.peer.__package__
    CedarBackup2.peer.logger
    CedarBackup2.release.AUTHOR
    CedarBackup2.release.COPYRIGHT
    CedarBackup2.release.DATE
    CedarBackup2.release.EMAIL
    CedarBackup2.release.URL
    CedarBackup2.release.VERSION
    CedarBackup2.release.__package__
    CedarBackup2.testutil.__package__
    CedarBackup2.tools.span.__package__
    CedarBackup2.tools.span.logger
    CedarBackup2.util.BYTES_PER_GBYTE
    CedarBackup2.util.BYTES_PER_KBYTE
    CedarBackup2.util.BYTES_PER_MBYTE
    CedarBackup2.util.BYTES_PER_SECTOR
    CedarBackup2.util.DEFAULT_LANGUAGE
    CedarBackup2.util.HOURS_PER_DAY
    CedarBackup2.util.ISO_SECTOR_SIZE
    CedarBackup2.util.KBYTES_PER_MBYTE
    CedarBackup2.util.LANG_VAR
    CedarBackup2.util.LOCALE_VARS
    CedarBackup2.util.MBYTES_PER_GBYTE
    CedarBackup2.util.MINUTES_PER_HOUR
    CedarBackup2.util.MOUNT_COMMAND
    CedarBackup2.util.MTAB_FILE
    CedarBackup2.util.SECONDS_PER_DAY
    CedarBackup2.util.SECONDS_PER_MINUTE
    CedarBackup2.util.UMOUNT_COMMAND
    CedarBackup2.util.UNIT_BYTES
    CedarBackup2.util.UNIT_GBYTES
    CedarBackup2.util.UNIT_KBYTES
    CedarBackup2.util.UNIT_MBYTES
    CedarBackup2.util.UNIT_SECTORS
    CedarBackup2.util.__package__
    CedarBackup2.util.logger
    CedarBackup2.util.outputLogger
    CedarBackup2.writer.__package__
    CedarBackup2.writers.cdwriter.CDRECORD_COMMAND
    CedarBackup2.writers.cdwriter.EJECT_COMMAND
    CedarBackup2.writers.cdwriter.MEDIA_CDRW_74
    CedarBackup2.writers.cdwriter.MEDIA_CDRW_80
    CedarBackup2.writers.cdwriter.MEDIA_CDR_74
    CedarBackup2.writers.cdwriter.MEDIA_CDR_80
    CedarBackup2.writers.cdwriter.MKISOFS_COMMAND
    CedarBackup2.writers.cdwriter.__package__
    CedarBackup2.writers.cdwriter.logger
    CedarBackup2.writers.dvdwriter.EJECT_COMMAND
    CedarBackup2.writers.dvdwriter.GROWISOFS_COMMAND
    CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSR
    CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSRW
    CedarBackup2.writers.dvdwriter.__package__
    CedarBackup2.writers.dvdwriter.logger
    CedarBackup2.writers.util.MKISOFS_COMMAND
    CedarBackup2.writers.util.VOLNAME_COMMAND
    CedarBackup2.writers.util.__package__
    CedarBackup2.writers.util.logger
    CedarBackup2.xmlutil.FALSE_BOOLEAN_VALUES
    CedarBackup2.xmlutil.TRUE_BOOLEAN_VALUES
    CedarBackup2.xmlutil.VALID_BOOLEAN_VALUES
    CedarBackup2.xmlutil.__package__
    CedarBackup2.xmlutil.logger

    CedarBackup2-2.22.0/doc/interface/module-tree.html

    Module Hierarchy

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend-module.html

    Package extend

    source code

    Official Cedar Backup Extensions

    This package provides official Cedar Backup extensions. These are Cedar Backup actions that are not part of the "standard" set of Cedar Backup actions, but are officially supported along with Cedar Backup.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules

    Variables
      __package__ = None
    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.subversion.LocalConfig-class.html

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Subversion-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <subversion> configuration section as the next child of a parent.
    source code
     
    _setSubversion(self, value)
    Property target used to set the subversion configuration value.
    source code
     
    _getSubversion(self)
    Property target used to get the subversion configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseSubversion(parent)
    Parses a subversion configuration section.
    source code
     
    _parseRepositories(parent)
    Reads a list of Repository objects from immediately beneath the parent.
    source code
     
    _addRepository(xmlDom, parentNode, repository)
    Adds a repository container as the next child of a parent.
    source code
     
    _parseRepositoryDirs(parent)
    Reads a list of RepositoryDir objects from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _addRepositoryDir(xmlDom, parentNode, repositoryDir)
    Adds a repository dir container as the next child of a parent.
    source code
    Properties
      subversion
    Subversion configuration in terms of a SubversionConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
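
    The argument handling documented above can be sketched in isolation. This is a simplified illustration of the documented contract, not Cedar Backup's actual code; DOM parsing and validation are elided:

```python
def init_config(xmlData=None, xmlPath=None, validate=True):
    """Sketch of the documented constructor contract for LocalConfig."""
    if xmlData is not None and xmlPath is not None:
        # The docs say passing both is an error
        raise ValueError("Pass either xmlData or xmlPath, not both.")
    if xmlPath is not None:
        with open(xmlPath) as f:   # read the on-disk XML document
            xmlData = f.read()
    if xmlData is None:
        return None                # empty (and invalid) configuration
    # ... parse xmlData into a DOM tree, then call validate() unless
    # validate=False ...
    return xmlData
```

    Calling it with neither argument yields the documented "empty and invalid" state rather than an error.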

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Subversion configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of repositories must contain at least one entry.

    Each repository must contain a repository path, and then must be either able to take collect mode and compress mode configuration from the parent SubversionConfig object, or must set each value on its own.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <subversion> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      collectMode    //cb_config/subversion/collect_mode
      compressMode   //cb_config/subversion/compress_mode
    

    We also add groups of the following items, one list element per item:

      repository     //cb_config/subversion/repository
      repository_dir //cb_config/subversion/repository_dir
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setSubversion(self, value)

    source code 

    Property target used to set the subversion configuration value. If not None, the value must be a SubversionConfig object.

    Raises:
    • ValueError - If the value is not a SubversionConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the subversion configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseSubversion(parent)
    Static Method

    source code 

    Parses a subversion configuration section.

    We read the following individual fields:

      collectMode    //cb_config/subversion/collect_mode
      compressMode   //cb_config/subversion/compress_mode
    

    We also read groups of the following items, one list element per item:

      repositories    //cb_config/subversion/repository
      repository_dirs //cb_config/subversion/repository_dir
    

    The repositories are parsed by _parseRepositories, and the repository dirs are parsed by _parseRepositoryDirs.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    SubversionConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseRepositories(parent)
    Static Method

    source code 

    Reads a list of Repository objects from immediately beneath the parent.

    We read the following individual fields:

      repositoryType          type
      repositoryPath          abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    

    The type field is optional, and its value is kept around only for reference.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of Repository objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _addRepository(xmlDom, parentNode, repository)
    Static Method

    source code 

    Adds a repository container as the next child of a parent.

    We add the following fields to the document:

      repositoryType          repository/type
      repositoryPath          repository/abs_path
      collectMode             repository/collect_mode
      compressMode            repository/compress_mode
    

    The <repository> node itself is created as the next child of the parent node. This method only adds one repository node. The parent must loop for each repository in the SubversionConfig object.

    If repository is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • repository - Repository to be added to the document.

    _parseRepositoryDirs(parent)
    Static Method

    source code 

    Reads a list of RepositoryDir objects from immediately beneath the parent.

    We read the following individual fields:

      repositoryType          type
      directoryPath           abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    

    We also read groups of the following items, one list element per item:

      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    The type field is optional, and its value is kept around only for reference.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of RepositoryDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are no entries for some group (e.g., no relative path items), then None will be returned for that group in the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (relative, patterns) exclusions.
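
    The behavior described above can be sketched with the standard library's minidom. This is an illustrative stand-in for the private helper, assuming the element layout documented above (exclude/rel_path and exclude/pattern):

```python
from xml.dom.minidom import parseString

def parse_exclusions(parentNode):
    """Sketch: collect exclude/rel_path and exclude/pattern text values,
    returning None for any group with no entries (per the docs above)."""
    relative, patterns = [], []
    for exclude in parentNode.getElementsByTagName("exclude"):
        for node in exclude.getElementsByTagName("rel_path"):
            relative.append(node.firstChild.data)
        for node in exclude.getElementsByTagName("pattern"):
            patterns.append(node.firstChild.data)
    return (relative or None, patterns or None)

doc = parseString("<dir><exclude><rel_path>tmp</rel_path></exclude></dir>")
print(parse_exclusions(doc.documentElement))   # (['tmp'], None)
```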

    _addRepositoryDir(xmlDom, parentNode, repositoryDir)
    Static Method

    source code 

    Adds a repository dir container as the next child of a parent.

    We add the following fields to the document:

      repositoryType          repository_dir/type
      directoryPath           repository_dir/abs_path
      collectMode             repository_dir/collect_mode
      compressMode            repository_dir/compress_mode
    

    We also add groups of the following items, one list element per item:

      relativeExcludePaths    repository_dir/exclude/rel_path
      excludePatterns         repository_dir/exclude/pattern
    

    The <repository_dir> node itself is created as the next child of the parent node. This method only adds one repository dir node. The parent must loop for each repository dir in the SubversionConfig object.

    If repositoryDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • repositoryDir - Repository dir to be added to the document.

    Property Details

    subversion

    Subversion configuration in terms of a SubversionConfig object.

    Get Method:
    _getSubversion(self) - Property target used to get the subversion configuration value.
    Set Method:
    _setSubversion(self, value) - Property target used to set the subversion configuration value.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mbox.LocalConfig-class.html

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Mbox-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds an <mbox> configuration section as the next child of a parent.
    source code
     
    _setMbox(self, value)
    Property target used to set the mbox configuration value.
    source code
     
    _getMbox(self)
    Property target used to get the mbox configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseMbox(parent)
    Parses an mbox configuration section.
    source code
     
    _parseMboxFiles(parent)
    Reads a list of MboxFile objects from immediately beneath the parent.
    source code
     
    _parseMboxDirs(parent)
    Reads a list of MboxDir objects from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _addMboxFile(xmlDom, parentNode, mboxFile)
    Adds an mbox file container as the next child of a parent.
    source code
     
    _addMboxDir(xmlDom, parentNode, mboxDir)
    Adds an mbox directory container as the next child of a parent.
    source code
    Properties
      mbox
    Mbox configuration in terms of a MboxConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Mbox configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the combined list of configured files and directories must contain at least one entry.

    Each configured file or directory must contain an absolute path, and then must be either able to take collect mode and compress mode configuration from the parent MboxConfig object, or must set each value on its own.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds an <mbox> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      collectMode    //cb_config/mbox/collect_mode
      compressMode   //cb_config/mbox/compress_mode
    

    We also add groups of the following items, one list element per item:

      mboxFiles      //cb_config/mbox/file
      mboxDirs       //cb_config/mbox/dir
    

    The mbox files and mbox directories are added by _addMboxFile and _addMboxDir.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setMbox(self, value)

    source code 

    Property target used to set the mbox configuration value. If not None, the value must be a MboxConfig object.

    Raises:
    • ValueError - If the value is not a MboxConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the mbox configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseMbox(parent)
    Static Method

    source code 

    Parses an mbox configuration section.

    We read the following individual fields:

      collectMode    //cb_config/mbox/collect_mode
      compressMode   //cb_config/mbox/compress_mode
    

    We also read groups of the following items, one list element per item:

      mboxFiles      //cb_config/mbox/file
      mboxDirs       //cb_config/mbox/dir
    

    The mbox files are parsed by _parseMboxFiles and the mbox directories are parsed by _parseMboxDirs.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    MboxConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseMboxFiles(parent)
    Static Method

    source code 

    Reads a list of MboxFile objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of MboxFile objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseMboxDirs(parent)
    Static Method

    source code 

    Reads a list of MboxDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             collect_mode
      compressMode            compress_mode
    

    We also read groups of the following items, one list element per item:

      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of MboxDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are no entries for some group (e.g., no relative path items), then None will be returned for that group in the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (relative, patterns) exclusions.

    _addMboxFile(xmlDom, parentNode, mboxFile)
    Static Method

    source code 

    Adds an mbox file container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            file/abs_path
      collectMode             file/collect_mode
      compressMode            file/compress_mode
    

    The <file> node itself is created as the next child of the parent node. This method only adds one mbox file node. The parent must loop for each mbox file in the MboxConfig object.

    If mboxFile is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • mboxFile - MboxFile to be added to the document.

    _addMboxDir(xmlDom, parentNode, mboxDir)
    Static Method

    source code 

    Adds an mbox directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      collectMode             dir/collect_mode
      compressMode            dir/compress_mode
    

    We also add groups of the following items, one list element per item:

      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one mbox directory node. The parent must loop for each mbox directory in the MboxConfig object.

    If mboxDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • mboxDir - MboxDir to be added to the document.

    Property Details

    mbox

    Mbox configuration in terms of a MboxConfig object.

    Get Method:
    _getMbox(self) - Property target used to get the mbox configuration value.
    Set Method:
    _setMbox(self, value) - Property target used to set the mbox configuration value.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.StoreConfig-class.html

    Class StoreConfig

    source code

    object --+
             |
            StoreConfig
    

    Class representing a Cedar Backup store configuration.

    The following restrictions exist on data in this class:

    • The source directory must be an absolute path.
    • The media type must be one of the values in VALID_MEDIA_TYPES.
    • The device type must be one of the values in VALID_DEVICE_TYPES.
    • The device path must be an absolute path.
    • The SCSI id, if provided, must be in the form specified by validateScsiId.
    • The drive speed must be an integer >= 1
    • The blanking behavior must be a BlankBehavior object
    • The refresh media delay must be an integer >= 0
    • The eject delay must be an integer >= 0

    Note that although the blanking factor must be a positive floating point number, it is stored as a string. This is done so that we can losslessly go back and forth between XML and object representations of configuration.

    Instance Methods
     
    __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None)
    Constructor for the StoreConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setSourceDir(self, value)
    Property target used to set the source directory.
    source code
     
    _getSourceDir(self)
    Property target used to get the source directory.
    source code
     
    _setMediaType(self, value)
    Property target used to set the media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type.
    source code
     
    _setDeviceType(self, value)
    Property target used to set the device type.
    source code
     
    _getDeviceType(self)
    Property target used to get the device type.
    source code
     
    _setDevicePath(self, value)
    Property target used to set the device path.
    source code
     
    _getDevicePath(self)
    Property target used to get the device path.
    source code
     
    _setDeviceScsiId(self, value)
    Property target used to set the SCSI id. The SCSI id must be valid per validateScsiId.
    source code
     
    _getDeviceScsiId(self)
    Property target used to get the SCSI id.
    source code
     
    _setDriveSpeed(self, value)
    Property target used to set the drive speed.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _setCheckData(self, value)
    Property target used to set the check data flag.
    source code
     
    _getCheckData(self)
    Property target used to get the check data flag.
    source code
     
    _setCheckMedia(self, value)
    Property target used to set the check media flag.
    source code
     
    _getCheckMedia(self)
    Property target used to get the check media flag.
    source code
     
    _setWarnMidnite(self, value)
    Property target used to set the midnite warning flag.
    source code
     
    _getWarnMidnite(self)
    Property target used to get the midnite warning flag.
    source code
     
    _setNoEject(self, value)
    Property target used to set the no-eject flag.
    source code
     
    _getNoEject(self)
    Property target used to get the no-eject flag.
    source code
     
    _setBlankBehavior(self, value)
    Property target used to set blanking behavior configuration.
    source code
     
    _getBlankBehavior(self)
    Property target used to get the blanking behavior configuration.
    source code
     
    _setRefreshMediaDelay(self, value)
    Property target used to set the refreshMediaDelay.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the refreshMediaDelay.
    source code
     
    _setEjectDelay(self, value)
    Property target used to set the ejectDelay.
    source code
     
    _getEjectDelay(self)
    Property target used to get the ejectDelay.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      sourceDir
    Directory whose contents should be written to media.
      mediaType
    Type of the media (see notes above).
      deviceType
    Type of the device (optional, see notes above).
      devicePath
    Filesystem device name for writer device.
      deviceScsiId
    SCSI id for writer device (optional, see notes above).
      driveSpeed
    Speed of the drive.
      checkData
    Whether resulting image should be validated.
      checkMedia
    Whether media should be checked before being written to.
      warnMidnite
    Whether to generate warnings for crossing midnite.
      noEject
    Indicates that the writer device should not be ejected.
      blankBehavior
    Controls optimized blanking behavior.
      refreshMediaDelay
    Delay, in seconds, to add after refreshing media.
      ejectDelay
    Delay, in seconds, to add after ejecting media before closing the tray.

    Inherited from object: __class__

    Method Details

    __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None)
    (Constructor)

    source code 

    Constructor for the StoreConfig class.

    Parameters:
    • sourceDir - Directory whose contents should be written to media.
    • mediaType - Type of the media (see notes above).
    • deviceType - Type of the device (optional, see notes above).
    • devicePath - Filesystem device name for writer device, i.e. /dev/cdrw.
    • deviceScsiId - SCSI id for writer device, i.e. [<method>:]scsibus,target,lun.
    • driveSpeed - Speed of the drive, i.e. 2 for 2x drive, etc.
    • checkData - Whether resulting image should be validated.
    • checkMedia - Whether media should be checked before being written to.
    • warnMidnite - Whether to generate warnings for crossing midnite.
    • noEject - Indicates that the writer device should not be ejected.
    • blankBehavior - Controls optimized blanking behavior.
    • refreshMediaDelay - Delay, in seconds, to add after refreshing media.
    • ejectDelay - Delay, in seconds, to add after ejecting media before closing the tray.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setSourceDir(self, value)

    source code 

    Property target used to set the source directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setMediaType(self, value)

    source code 

    Property target used to set the media type. The value must be one of VALID_MEDIA_TYPES.

    Raises:
    • ValueError - If the value is not valid.

    _setDeviceType(self, value)

    source code 

    Property target used to set the device type. The value must be one of VALID_DEVICE_TYPES.

    Raises:
    • ValueError - If the value is not valid.

    _setDevicePath(self, value)

    source code 

    Property target used to set the device path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setDeviceScsiId(self, value)

    source code 

    Property target used to set the SCSI id. The SCSI id must be valid per validateScsiId.

    Raises:
    • ValueError - If the value is not valid.

    _setDriveSpeed(self, value)

    source code 

    Property target used to set the drive speed. The drive speed must be valid per validateDriveSpeed.

    Raises:
    • ValueError - If the value is not valid.

    _setCheckData(self, value)

    source code 

    Property target used to set the check data flag. No validations, but we normalize the value to True or False.

    _setCheckMedia(self, value)

    source code 

    Property target used to set the check media flag. No validations, but we normalize the value to True or False.

    _setWarnMidnite(self, value)

    source code 

    Property target used to set the midnite warning flag. No validations, but we normalize the value to True or False.

    _setNoEject(self, value)

    source code 

    Property target used to set the no-eject flag. No validations, but we normalize the value to True or False.

    _setBlankBehavior(self, value)

    source code 

    Property target used to set blanking behavior configuration. If not None, the value must be a BlankBehavior object.

    Raises:
    • ValueError - If the value is not a BlankBehavior.

    _setRefreshMediaDelay(self, value)

    source code 

    Property target used to set the refreshMediaDelay. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setEjectDelay(self, value)

    source code 

    Property target used to set the ejectDelay. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.
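The setter conventions documented above (absolute-path checks, True/False normalization, integer >= 0 checks) all follow the same _get/_set property-target pattern. A minimal sketch of that pattern with three representative targets; this is an illustrative stand-in, not the actual CedarBackup2 code:

```python
import os


class StoreLikeConfig(object):
    """Illustrative stand-in showing the _get/_set property-target pattern."""

    def __init__(self, sourceDir=None, checkData=False, ejectDelay=None):
        self._sourceDir = None
        self._checkData = False
        self._ejectDelay = None
        self.sourceDir = sourceDir      # assignments go through the property setters
        self.checkData = checkData
        self.ejectDelay = ejectDelay

    def _setSourceDir(self, value):
        # Must be an absolute path if not None; need not exist on disk.
        if value is not None and not os.path.isabs(value):
            raise ValueError("Source directory must be an absolute path.")
        self._sourceDir = value

    def _getSourceDir(self):
        return self._sourceDir

    def _setCheckData(self, value):
        # No validation, but normalize any truthy/falsy value to True/False.
        self._checkData = True if value else False

    def _getCheckData(self):
        return self._checkData

    def _setEjectDelay(self, value):
        # Must be an integer >= 0 if not None.
        if value is not None:
            value = int(value)
            if value < 0:
                raise ValueError("Eject delay must be an integer >= 0.")
        self._ejectDelay = value

    def _getEjectDelay(self):
        return self._ejectDelay

    sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.")
    checkData = property(_getCheckData, _setCheckData, None, "Whether the resulting image should be validated.")
    ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, added after ejecting media.")
```

Routing the constructor arguments through the public properties means the same validation applies at construction time and on later assignment, which is why the constructor is documented as raising ValueError for invalid values.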

    Property Details

    sourceDir

    Directory whose contents should be written to media.

    Get Method:
    _getSourceDir(self) - Property target used to get the source directory.
    Set Method:
    _setSourceDir(self, value) - Property target used to set the source directory.

    mediaType

    Type of the media (see notes above).

    Get Method:
    _getMediaType(self) - Property target used to get the media type.
    Set Method:
    _setMediaType(self, value) - Property target used to set the media type.

    deviceType

    Type of the device (optional, see notes above).

    Get Method:
    _getDeviceType(self) - Property target used to get the device type.
    Set Method:
    _setDeviceType(self, value) - Property target used to set the device type.

    devicePath

    Filesystem device name for writer device.

    Get Method:
    _getDevicePath(self) - Property target used to get the device path.
    Set Method:
    _setDevicePath(self, value) - Property target used to set the device path.

    deviceScsiId

    SCSI id for writer device (optional, see notes above).

    Get Method:
    _getDeviceScsiId(self) - Property target used to get the SCSI id.
    Set Method:
    _setDeviceScsiId(self, value) - Property target used to set the SCSI id. The SCSI id must be valid per validateScsiId.

    driveSpeed

    Speed of the drive.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.
    Set Method:
    _setDriveSpeed(self, value) - Property target used to set the drive speed.

    checkData

    Whether resulting image should be validated.

    Get Method:
    _getCheckData(self) - Property target used to get the check data flag.
    Set Method:
    _setCheckData(self, value) - Property target used to set the check data flag.

    checkMedia

    Whether media should be checked before being written to.

    Get Method:
    _getCheckMedia(self) - Property target used to get the check media flag.
    Set Method:
    _setCheckMedia(self, value) - Property target used to set the check media flag.

    warnMidnite

    Whether to generate warnings for crossing midnite.

    Get Method:
    _getWarnMidnite(self) - Property target used to get the midnite warning flag.
    Set Method:
    _setWarnMidnite(self, value) - Property target used to set the midnite warning flag.

    noEject

    Indicates that the writer device should not be ejected.

    Get Method:
    _getNoEject(self) - Property target used to get the no-eject flag.
    Set Method:
    _setNoEject(self, value) - Property target used to set the no-eject flag.

    blankBehavior

    Controls optimized blanking behavior.

    Get Method:
    _getBlankBehavior(self) - Property target used to get the blanking behavior configuration.
    Set Method:
    _setBlankBehavior(self, value) - Property target used to set blanking behavior configuration.

    refreshMediaDelay

    Delay, in seconds, to add after refreshing media.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the refreshMediaDelay.
    Set Method:
    _setRefreshMediaDelay(self, value) - Property target used to set the refreshMediaDelay.

    ejectDelay

    Delay, in seconds, to add after ejecting media before closing the tray.

    Get Method:
    _getEjectDelay(self) - Property target used to get the ejectDelay.
    Set Method:
    _setEjectDelay(self, value) - Property target used to set the ejectDelay.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.ObjectTypeList-class.html
    Package CedarBackup2 :: Module util :: Class ObjectTypeList

    Class ObjectTypeList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    ObjectTypeList
    

    Class representing a list containing only objects with a certain type.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list matches the type that is requested. The comparison uses the built-in isinstance, which should allow subclasses of the requested type to be added to the list as well.

    The objectName value will be used in exceptions, i.e. "Item must be a CollectDir object." if objectName is "CollectDir".
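The behavior described above can be sketched as a small list subclass; this is a simplified stand-in for illustration, not the packaged implementation:

```python
class TypedList(list):
    """List that only accepts items of a given type (isinstance check, so subclasses pass)."""

    def __init__(self, objectType, objectName):
        list.__init__(self)
        self.objectType = objectType
        self.objectName = objectName

    def _check(self, item):
        # objectName is used only to build a readable exception message.
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s object." % self.objectName)

    def append(self, item):
        self._check(item)
        list.append(self, item)

    def insert(self, index, item):
        self._check(item)
        list.insert(self, index, item)

    def extend(self, seq):
        # Validate every element before mutating, so a bad sequence leaves the list unchanged.
        for item in seq:
            self._check(item)
        list.extend(self, seq)
```

Note that only these three methods are guarded; as with ObjectTypeList, other mutators inherited from list (slice assignment, +=, etc.) would bypass the check.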

    Instance Methods
    new empty list
    __init__(self, objectType, objectName)
    Initializes a typed list for a particular type.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, objectType, objectName)
    (Constructor)

    source code 

    Initializes a typed list for a particular type.

    Parameters:
    • objectType - Type that the list elements must match.
    • objectName - Short string containing the "name" of the type.
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.extend

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.PeersConfig-class.html
    Package CedarBackup2 :: Module config :: Class PeersConfig

    Class PeersConfig

    source code

    object --+
             |
            PeersConfig
    

    Class representing Cedar Backup global peer configuration.

    This section contains a list of local and remote peers in a master's backup pool. The section is optional. If a master does not define this section, then all peers are unmanaged, and the stage configuration section must explicitly list any peer that is to be staged. If this section is configured, then peers may be managed or unmanaged, and the stage section peer configuration (if any) completely overrides this configuration.

    The following restrictions exist on data in this class:

    • The list of local peers must contain only LocalPeer objects
    • The list of remote peers must contain only RemotePeer objects

    Note: Lists within this class are "unordered" for equality comparisons.
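"Unordered" for equality comparisons means the peer lists compare as multisets: two configurations are equal if they hold the same peers, regardless of order. A minimal sketch of that kind of list; this is a hypothetical illustration that assumes sortable elements, not the actual UnorderedList from CedarBackup2.util:

```python
class UnorderedEqList(list):
    """List whose equality ignores element order (multiset semantics)."""

    def __eq__(self, other):
        if other is None:
            return False
        # Compare sorted copies; assumes elements are mutually comparable.
        return sorted(self) == sorted(other)

    def __ne__(self, other):
        return not self.__eq__(other)

    __hash__ = None  # mutable and order-insensitive, so not hashable
```

With this semantics, [a, b] and [b, a] compare equal, which is what the note above implies for the localPeers and remotePeers lists.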

    Instance Methods
     
    __init__(self, localPeers=None, remotePeers=None)
    Constructor for the PeersConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    hasPeers(self)
    Indicates whether any peers are filled into this object.
    source code
     
    _setLocalPeers(self, value)
    Property target used to set the local peers list.
    source code
     
    _getLocalPeers(self)
    Property target used to get the local peers list.
    source code
     
    _setRemotePeers(self, value)
    Property target used to set the remote peers list.
    source code
     
    _getRemotePeers(self)
    Property target used to get the remote peers list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      localPeers
    List of local peers.
      remotePeers
    List of remote peers.

    Inherited from object: __class__

    Method Details

    __init__(self, localPeers=None, remotePeers=None)
    (Constructor)

    source code 

    Constructor for the PeersConfig class.

    Parameters:
    • localPeers - List of local peers.
    • remotePeers - List of remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    hasPeers(self)

    source code 

    Indicates whether any peers are filled into this object.

    Returns:
    Boolean true if any local or remote peers are filled in, false otherwise.

    _setLocalPeers(self, value)

    source code 

    Property target used to set the local peers list. Either the value must be None or each element must be a LocalPeer.

    Raises:
    • ValueError - If the value is not a LocalPeer.

    _setRemotePeers(self, value)

    source code 

    Property target used to set the remote peers list. Either the value must be None or each element must be a RemotePeer.

    Raises:
    • ValueError - If the value is not a RemotePeer.

    Property Details

    localPeers

    List of local peers.

    Get Method:
    _getLocalPeers(self) - Property target used to get the local peers list.
    Set Method:
    _setLocalPeers(self, value) - Property target used to set the local peers list.

    remotePeers

    List of remote peers.

    Get Method:
    _getRemotePeers(self) - Property target used to get the remote peers list.
    Set Method:
    _setRemotePeers(self, value) - Property target used to set the remote peers list.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.filesystem-pysrc.html
    Package CedarBackup2 :: Module filesystem

    Source Code for Module CedarBackup2.filesystem

    # -*- coding: iso-8859-1 -*-
    # vim: set ft=python ts=3 sw=3 expandtab:
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    #              C E D A R
    #          S O L U T I O N S       "Software done right."
    #           S O F T W A R E
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici.
    # All rights reserved.
    #
    # This program is free software; you can redistribute it and/or
    # modify it under the terms of the GNU General Public License,
    # Version 2, as published by the Free Software Foundation.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    #
    # Copies of the GNU General Public License are available from
    # the Free Software Foundation website, http://www.gnu.org/.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Author   : Kenneth J. Pronovici <pronovic@ieee.org>
    # Language : Python (>= 2.5)
    # Project  : Cedar Backup, release 2
    # Revision : $Id: filesystem.py 1022 2011-10-11 23:27:49Z pronovic $
    # Purpose  : Provides filesystem-related objects.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    ########################################################################
    # Module documentation
    ########################################################################

    """
    Provides filesystem-related objects.
    @sort: FilesystemList, BackupFileList, PurgeItemList
    @author: Kenneth J. Pronovici <pronovic@ieee.org>
    """


    ########################################################################
    # Imported modules
    ########################################################################

    # System modules
    import os
    import re
    import math
    import logging
    import tarfile

    # Cedar Backup modules
    from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit
    from CedarBackup2.util import AbsolutePathList, UnorderedList, RegexList
    from CedarBackup2.util import removeKeys, displayBytes, calculateFileAge, encodePath, dereferenceLink


    ########################################################################
    # Module-wide variables
    ########################################################################

    logger = logging.getLogger("CedarBackup2.log.filesystem")
    

    ########################################################################
    # FilesystemList class definition
    ########################################################################

    class FilesystemList(list):
       ######################
       # Class documentation
       ######################

       """
       Represents a list of filesystem items.

       This is a generic class that represents a list of filesystem items.  Callers
       can add individual files or directories to the list, or can recursively add
       the contents of a directory.  The class also allows for up-front exclusions
       in several forms (all files, all directories, all items matching a pattern,
       all items whose basename matches a pattern, or all directories containing a
       specific "ignore file").  Symbolic links are typically backed up
       non-recursively, i.e. the link to a directory is backed up, but not the
       contents of that link (we don't want to deal with recursive loops, etc.).

       The custom methods such as L{addFile} will only add items if they exist on
       the filesystem and do not match any exclusions that are already in place.
       However, since a FilesystemList is a subclass of Python's standard list
       class, callers can also add items to the list in the usual way, using
       methods like C{append()} or C{insert()}.  No validations apply to items
       added to the list in this way; however, many list-manipulation methods deal
       "gracefully" with items that don't exist in the filesystem, often by
       ignoring them.

       Once a list has been created, callers can remove individual items from the
       list using standard methods like C{pop()} or C{remove()} or they can use
       custom methods to remove specific types of entries or entries which match a
       particular pattern.

       @note: Regular expression patterns that apply to paths are assumed to be
       bounded at front and back by the beginning and end of the string, i.e. they
       are treated as if they begin with C{^} and end with C{$}.  This is true
       whether we are matching a complete path or a basename.

       @note: Some platforms, like Windows, do not support soft links.  On those
       platforms, the ignore-soft-links flag can be set, but it won't do any good
       because the operating system never reports a file as a soft link.

       @sort: __init__, addFile, addDir, addDirContents, removeFiles, removeDirs,
              removeLinks, removeMatch, removeInvalid, normalize,
              excludeFiles, excludeDirs, excludeLinks, excludePaths,
              excludePatterns, excludeBasenamePatterns, ignoreFile
       """


       ##############
       # Constructor
       ##############
       def __init__(self):
          """Initializes a list with no configured exclusions."""
          list.__init__(self)
          self._excludeFiles = False
          self._excludeDirs = False
          self._excludeLinks = False
          self._excludePaths = None
          self._excludePatterns = None
          self._excludeBasenamePatterns = None
          self._ignoreFile = None
          self.excludeFiles = False
          self.excludeLinks = False
          self.excludeDirs = False
          self.excludePaths = []
          self.excludePatterns = RegexList()
          self.excludeBasenamePatterns = RegexList()
          self.ignoreFile = None

       #############
       # Properties
       #############

       def _setExcludeFiles(self, value):
          """
          Property target used to set the exclude files flag.
          No validations, but we normalize the value to C{True} or C{False}.
          """
          if value:
             self._excludeFiles = True
          else:
             self._excludeFiles = False

       def _getExcludeFiles(self):
          """
          Property target used to get the exclude files flag.
          """
          return self._excludeFiles

       def _setExcludeDirs(self, value):
          """
          Property target used to set the exclude directories flag.
          No validations, but we normalize the value to C{True} or C{False}.
          """
          if value:
             self._excludeDirs = True
          else:
             self._excludeDirs = False

       def _getExcludeDirs(self):
          """
          Property target used to get the exclude directories flag.
          """
          return self._excludeDirs

       def _setExcludeLinks(self, value):
          """
          Property target used to set the exclude soft links flag.
          No validations, but we normalize the value to C{True} or C{False}.
          """
          if value:
             self._excludeLinks = True
          else:
             self._excludeLinks = False

       def _getExcludeLinks(self):
          """
          Property target used to get the exclude soft links flag.
          """
          return self._excludeLinks

       def _setExcludePaths(self, value):
          """
          Property target used to set the exclude paths list.
          A C{None} value is converted to an empty list.
          Elements do not have to exist on disk at the time of assignment.
          @raise ValueError: If any list element is not an absolute path.
          """
          self._excludePaths = AbsolutePathList()
          if value is not None:
             self._excludePaths.extend(value)

       def _getExcludePaths(self):
          """
          Property target used to get the absolute exclude paths list.
          """
          return self._excludePaths

       def _setExcludePatterns(self, value):
          """
          Property target used to set the exclude patterns list.
          A C{None} value is converted to an empty list.
          """
          self._excludePatterns = RegexList()
          if value is not None:
             self._excludePatterns.extend(value)

       def _getExcludePatterns(self):
          """
          Property target used to get the exclude patterns list.
          """
          return self._excludePatterns

       def _setExcludeBasenamePatterns(self, value):
          """
          Property target used to set the exclude basename patterns list.
          A C{None} value is converted to an empty list.
          """
          self._excludeBasenamePatterns = RegexList()
          if value is not None:
             self._excludeBasenamePatterns.extend(value)

       def _getExcludeBasenamePatterns(self):
          """
          Property target used to get the exclude basename patterns list.
          """
          return self._excludeBasenamePatterns

       def _setIgnoreFile(self, value):
          """
          Property target used to set the ignore file.
          The value must be a non-empty string if it is not C{None}.
          @raise ValueError: If the value is an empty string.
          """
          if value is not None:
             if len(value) < 1:
                raise ValueError("The ignore file must be a non-empty string.")
          self._ignoreFile = value

       def _getIgnoreFile(self):
          """
          Property target used to get the ignore file.
          """
          return self._ignoreFile

       excludeFiles = property(_getExcludeFiles, _setExcludeFiles, None, "Boolean indicating whether files should be excluded.")
       excludeDirs = property(_getExcludeDirs, _setExcludeDirs, None, "Boolean indicating whether directories should be excluded.")
       excludeLinks = property(_getExcludeLinks, _setExcludeLinks, None, "Boolean indicating whether soft links should be excluded.")
       excludePaths = property(_getExcludePaths, _setExcludePaths, None, "List of absolute paths to be excluded.")
       excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None,
                                  "List of regular expression patterns (matching complete path) to be excluded.")
       excludeBasenamePatterns = property(_getExcludeBasenamePatterns, _setExcludeBasenamePatterns,
                                          None, "List of regular expression patterns (matching basename) to be excluded.")
       ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Name of file which will cause directory contents to be ignored.")


       ##############
       # Add methods
       ##############
       def addFile(self, path):
          """
          Adds a file to the list.

          The path must exist and must be a file or a link to an existing file.  It
          will be added to the list subject to any exclusions that are in place.

          @param path: File path to be added to the list
          @type path: String representing a path on disk

          @return: Number of items added to the list.

          @raise ValueError: If path is not a file or does not exist.
          @raise ValueError: If the path could not be encoded properly.
          """
          path = encodePath(path)
          if not os.path.exists(path) or not os.path.isfile(path):
             logger.debug("Path [%s] is not a file or does not exist on disk." % path)
             raise ValueError("Path is not a file or does not exist on disk.")
          if self.excludeLinks and os.path.islink(path):
             logger.debug("Path [%s] is excluded based on excludeLinks." % path)
             return 0
          if self.excludeFiles:
             logger.debug("Path [%s] is excluded based on excludeFiles." % path)
             return 0
          if path in self.excludePaths:
             logger.debug("Path [%s] is excluded based on excludePaths." % path)
             return 0
          for pattern in self.excludePatterns:
             pattern = encodePath(pattern)  # use same encoding as filenames
             if re.compile(r"^%s$" % pattern).match(path):  # safe to assume all are valid due to RegexList
                logger.debug("Path [%s] is excluded based on pattern [%s]." % (path, pattern))
                return 0
          for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
             pattern = encodePath(pattern)  # use same encoding as filenames
             if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
                logger.debug("Path [%s] is excluded based on basename pattern [%s]." % (path, pattern))
                return 0
          self.append(path)
          logger.debug("Added file to list: [%s]" % path)
          return 1
       def addDir(self, path):
          """
          Adds a directory to the list.

          The path must exist and must be a directory or a link to an existing
          directory.  It will be added to the list subject to any exclusions that
          are in place.  The L{ignoreFile} does not apply to this method, only to
          L{addDirContents}.

          @param path: Directory path to be added to the list
          @type path: String representing a path on disk

          @return: Number of items added to the list.

          @raise ValueError: If path is not a directory or does not exist.
          @raise ValueError: If the path could not be encoded properly.
          """
          path = encodePath(path)
          path = normalizeDir(path)
          if not os.path.exists(path) or not os.path.isdir(path):
             logger.debug("Path [%s] is not a directory or does not exist on disk." % path)
             raise ValueError("Path is not a directory or does not exist on disk.")
          if self.excludeLinks and os.path.islink(path):
             logger.debug("Path [%s] is excluded based on excludeLinks." % path)
             return 0
          if self.excludeDirs:
             logger.debug("Path [%s] is excluded based on excludeDirs." % path)
             return 0
          if path in self.excludePaths:
             logger.debug("Path [%s] is excluded based on excludePaths." % path)
             return 0
          for pattern in self.excludePatterns:  # safe to assume all are valid due to RegexList
             pattern = encodePath(pattern)  # use same encoding as filenames
             if re.compile(r"^%s$" % pattern).match(path):
                logger.debug("Path [%s] is excluded based on pattern [%s]." % (path, pattern))
                return 0
          for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
             pattern = encodePath(pattern)  # use same encoding as filenames
             if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
                logger.debug("Path [%s] is excluded based on basename pattern [%s]." % (path, pattern))
                return 0
          self.append(path)
          logger.debug("Added directory to list: [%s]" % path)
          return 1
       def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False):
          """
          Adds the contents of a directory to the list.

          The path must exist and must be a directory or a link to a directory.
          The contents of the directory (as well as the directory path itself) will
          be recursively added to the list, subject to any exclusions that are in
          place.  If you only want the directory and its immediate contents to be
          added, then pass in C{recursive=False}.

          @note: If a directory's absolute path matches an exclude pattern or path,
          or if the directory contains the configured ignore file, then the
          directory and all of its contents will be recursively excluded from the
          list.

          @note: If the passed-in directory happens to be a soft link, it will be
          recursed.  However, the linkDepth parameter controls whether any soft
          links I{within} the directory will be recursed.  The link depth is
          maximum depth of the tree at which soft links should be followed.  So, a
          depth of 0 does not follow any soft links, a depth of 1 follows only
          links within the passed-in directory, a depth of 2 follows the links at
          the next level down, etc.

          @note: Any invalid soft links (i.e. soft links that point to
          non-existent items) will be silently ignored.

          @note: The L{excludeDirs} flag only controls whether any given directory
          path itself is added to the list once it has been discovered.  It does
          I{not} modify any behavior related to directory recursion.

          @note: If you call this method I{on a link to a directory} that link will
          never be dereferenced (it may, however, be followed).

          @param path: Directory path whose contents should be added to the list
          @type path: String representing a path on disk

          @param recursive: Indicates whether directory contents should be added recursively.
          @type recursive: Boolean value

          @param addSelf: Indicates whether the directory itself should be added to the list.
          @type addSelf: Boolean value

          @param linkDepth: Maximum depth of the tree at which soft links should be followed
          @type linkDepth: Integer value, where zero means not to follow any soft links

          @param dereference: Indicates whether soft links, if followed, should be dereferenced
          @type dereference: Boolean value

          @return: Number of items recursively added to the list

          @raise ValueError: If path is not a directory or does not exist.
          @raise ValueError: If the path could not be encoded properly.
          """
          path = encodePath(path)
          path = normalizeDir(path)
          return self._addDirContentsInternal(path, addSelf, recursive, linkDepth, dereference)
    def _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False):
        """
        Internal implementation of C{addDirContents}.

        This internal implementation exists due to some refactoring.
        Basically, some subclasses have a need to add the contents of a
        directory, but not the directory itself. This is different than the
        standard C{FilesystemList} behavior and actually ends up making a
        special case out of the first call in the recursive chain. Since I
        don't want to expose the modified interface, C{addDirContents} ends
        up being wholly implemented in terms of this method.

        The linkDepth parameter controls whether soft links are followed when
        we are adding the contents recursively. Any recursive call reduces
        the value by one. If the value is zero or less, then soft links will
        just be added as directories, but will not be followed. This means
        that links are followed to a I{constant depth} starting from the
        top-most directory.

        There is one difference between soft links and directories: soft
        links that are added recursively are not placed into the list
        explicitly. This is because if we do add the links recursively, the
        resulting tar file gets a little confused (it has a link and a
        directory with the same name).

        @note: If you call this method I{on a link to a directory}, that link
        will never be dereferenced (it may, however, be followed).

        @param path: Directory path whose contents should be added to the list.
        @param includePath: Indicates whether to include the path as well as contents.
        @param recursive: Indicates whether directory contents should be added recursively.
        @param linkDepth: Depth of soft links that should be followed
        @param dereference: Indicates whether soft links, if followed, should be dereferenced

        @return: Number of items recursively added to the list

        @raise ValueError: If path is not a directory or does not exist.
        """
        added = 0
        if not os.path.exists(path) or not os.path.isdir(path):
            logger.debug("Path [%s] is not a directory or does not exist on disk." % path)
            raise ValueError("Path is not a directory or does not exist on disk.")
        if path in self.excludePaths:
            logger.debug("Path [%s] is excluded based on excludePaths." % path)
            return added
        for pattern in self.excludePatterns:  # safe to assume all are valid due to RegexList
            pattern = encodePath(pattern)  # use same encoding as filenames
            if re.compile(r"^%s$" % pattern).match(path):
                logger.debug("Path [%s] is excluded based on pattern [%s]." % (path, pattern))
                return added
        for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
            pattern = encodePath(pattern)  # use same encoding as filenames
            if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
                logger.debug("Path [%s] is excluded based on basename pattern [%s]." % (path, pattern))
                return added
        if self.ignoreFile is not None and os.path.exists(os.path.join(path, self.ignoreFile)):
            logger.debug("Path [%s] is excluded based on ignore file." % path)
            return added
        if includePath:
            added += self.addDir(path)  # could actually be excluded by addDir, yet
        for entry in os.listdir(path):
            entrypath = os.path.join(path, entry)
            if os.path.isfile(entrypath):
                if linkDepth > 0 and dereference:
                    derefpath = dereferenceLink(entrypath)
                    if derefpath != entrypath:
                        added += self.addFile(derefpath)
                added += self.addFile(entrypath)
            elif os.path.isdir(entrypath):
                if os.path.islink(entrypath):
                    if recursive:
                        if linkDepth > 0:
                            newDepth = linkDepth - 1
                            if dereference:
                                derefpath = dereferenceLink(entrypath)
                                if derefpath != entrypath:
                                    added += self._addDirContentsInternal(derefpath, True, recursive, newDepth, dereference)
                                added += self.addDir(entrypath)
                            else:
                                added += self._addDirContentsInternal(entrypath, False, recursive, newDepth, dereference)
                        else:
                            added += self.addDir(entrypath)
                    else:
                        added += self.addDir(entrypath)
                else:
                    if recursive:
                        newDepth = linkDepth - 1
                        added += self._addDirContentsInternal(entrypath, True, recursive, newDepth, dereference)
                    else:
                        added += self.addDir(entrypath)
        return added
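The constant-depth rule described above (decrement the depth on every recursive call, follow a link only while the depth is still positive) can be sketched on its own. This is a minimal illustration, not the library's code: it walks a nested dict standing in for a directory tree, where keys prefixed with `link:` model soft links, and all names are hypothetical.

```python
def walk_links(tree, link_depth):
    """Collect entries from a nested-dict 'filesystem'.  Keys starting with
    'link:' model soft links; a link is only descended into while link_depth
    is positive.  The depth is decremented on every recursive call, so links
    are followed to a constant depth from the starting directory."""
    added = []
    for name, child in sorted(tree.items()):
        if isinstance(child, dict):
            if name.startswith("link:") and link_depth <= 0:
                added.append(name)          # link recorded but not followed
            else:
                added.append(name)
                added.extend(walk_links(child, link_depth - 1))
        else:
            added.append(name)              # plain file
    return added
```

With depth 0 no links are followed; with depth 1 only links directly inside the starting directory are followed, mirroring the behavior documented for C{linkDepth}.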

    #################
    # Remove methods
    #################

    def removeFiles(self, pattern=None):
        """
        Removes file entries from the list.

        If C{pattern} is not passed in or is C{None}, then all file entries
        will be removed from the list. Otherwise, only those file entries
        matching the pattern will be removed. Any entry which does not exist
        on disk will be ignored (use L{removeInvalid} to purge those entries).

        This method might be fairly slow for large lists, since it must check
        the type of each item in the list. If you know ahead of time that
        you want to exclude all files, then you will be better off setting
        L{excludeFiles} to C{True} before adding items to the list.

        @param pattern: Regular expression pattern representing entries to remove

        @return: Number of entries removed
        @raise ValueError: If the passed-in pattern is not a valid regular expression.
        """
        removed = 0
        if pattern is None:
            for entry in self[:]:
                if os.path.exists(entry) and os.path.isfile(entry):
                    self.remove(entry)
                    logger.debug("Removed path [%s] from list." % entry)
                    removed += 1
        else:
            try:
                pattern = encodePath(pattern)  # use same encoding as filenames
                compiled = re.compile(pattern)
            except re.error:
                raise ValueError("Pattern is not a valid regular expression.")
            for entry in self[:]:
                if os.path.exists(entry) and os.path.isfile(entry):
                    if compiled.match(entry):
                        self.remove(entry)
                        logger.debug("Removed path [%s] from list." % entry)
                        removed += 1
        logger.debug("Removed a total of %d entries." % removed)
        return removed
    def removeDirs(self, pattern=None):
        """
        Removes directory entries from the list.

        If C{pattern} is not passed in or is C{None}, then all directory
        entries will be removed from the list. Otherwise, only those
        directory entries matching the pattern will be removed. Any entry
        which does not exist on disk will be ignored (use L{removeInvalid}
        to purge those entries).

        This method might be fairly slow for large lists, since it must check
        the type of each item in the list. If you know ahead of time that
        you want to exclude all directories, then you will be better off
        setting L{excludeDirs} to C{True} before adding items to the list
        (note that this will not prevent you from recursively adding the
        I{contents} of directories).

        @param pattern: Regular expression pattern representing entries to remove

        @return: Number of entries removed
        @raise ValueError: If the passed-in pattern is not a valid regular expression.
        """
        removed = 0
        if pattern is None:
            for entry in self[:]:
                if os.path.exists(entry) and os.path.isdir(entry):
                    self.remove(entry)
                    logger.debug("Removed path [%s] from list." % entry)
                    removed += 1
        else:
            try:
                pattern = encodePath(pattern)  # use same encoding as filenames
                compiled = re.compile(pattern)
            except re.error:
                raise ValueError("Pattern is not a valid regular expression.")
            for entry in self[:]:
                if os.path.exists(entry) and os.path.isdir(entry):
                    if compiled.match(entry):
                        self.remove(entry)
                        logger.debug("Removed path [%s] from list based on pattern [%s]." % (entry, pattern))
                        removed += 1
        logger.debug("Removed a total of %d entries." % removed)
        return removed
    def removeMatch(self, pattern):
        """
        Removes from the list all entries matching a pattern.

        This method removes from the list all entries which match the
        passed-in C{pattern}. Since there is no need to check the type of
        each entry, it is faster to call this method than to call the
        L{removeFiles}, L{removeDirs} or L{removeLinks} methods
        individually. If you know which patterns you will want to remove
        ahead of time, you may be better off setting L{excludePatterns} or
        L{excludeBasenamePatterns} before adding items to the list.

        @note: Unlike when using the exclude lists, the pattern here is
        I{not} bounded at the front and the back of the string. You can use
        any pattern you want.

        @param pattern: Regular expression pattern representing entries to remove

        @return: Number of entries removed.
        @raise ValueError: If the passed-in pattern is not a valid regular expression.
        """
        try:
            pattern = encodePath(pattern)  # use same encoding as filenames
            compiled = re.compile(pattern)
        except re.error:
            raise ValueError("Pattern is not a valid regular expression.")
        removed = 0
        for entry in self[:]:
            if compiled.match(entry):
                self.remove(entry)
                logger.debug("Removed path [%s] from list based on pattern [%s]." % (entry, pattern))
                removed += 1
        logger.debug("Removed a total of %d entries." % removed)
        return removed
    def removeInvalid(self):
        """
        Removes from the list all entries that do not exist on disk.

        This method removes from the list all entries which do not currently
        exist on disk in some form. No attention is paid to whether the
        entries are files or directories.

        @return: Number of entries removed.
        """
        removed = 0
        for entry in self[:]:
            if not os.path.exists(entry):
                self.remove(entry)
                logger.debug("Removed path [%s] from list." % entry)
                removed += 1
        logger.debug("Removed a total of %d entries." % removed)
        return removed

    ##################
    # Utility methods
    ##################

    def normalize(self):
        """Normalizes the list, ensuring that each entry is unique."""
        orig = len(self)
        self.sort()
        dups = filter(lambda x, self=self: self[x] == self[x+1], range(0, len(self) - 1))
        items = map(lambda x, self=self: self[x], dups)
        map(self.remove, items)
        new = len(self)
        logger.debug("Completed normalizing list; removed %d items (%d originally, %d now)." % (orig-new, orig, new))
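The sort-then-dedupe step above can be expressed more directly. This is a minimal sketch of the same effect (a sorted list with duplicates removed), not the library's implementation; the function name is hypothetical.

```python
def normalize(entries):
    """Return a sorted copy of entries with duplicates removed, matching the
    net effect of FilesystemList.normalize without mutating in place."""
    return sorted(set(entries))
```

A caller wanting in-place behavior could assign back with `entries[:] = normalize(entries)`.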
    def verify(self):
        """
        Verifies that all entries in the list exist on disk.
        @return: C{True} if all entries exist, C{False} otherwise.
        """
        for entry in self:
            if not os.path.exists(entry):
                logger.debug("Path [%s] is invalid; list is not valid." % entry)
                return False
        logger.debug("All entries in list are valid.")
        return True

########################################################################
# SpanItem class definition
########################################################################

class SpanItem(object):  # pylint: disable=R0903

    """
    Item returned by L{BackupFileList.generateSpan}.
    """

    def __init__(self, fileList, size, capacity, utilization):
        """
        Create object.
        @param fileList: List of files
        @param size: Size (in bytes) of files
        @param capacity: Capacity (in bytes) associated with the list
        @param utilization: Utilization, as a percentage (0-100)
        """
        self.fileList = fileList
        self.size = size
        self.capacity = capacity
        self.utilization = utilization

########################################################################
# BackupFileList class definition
########################################################################

class BackupFileList(FilesystemList):  # pylint: disable=R0904

    ######################
    # Class documentation
    ######################

    """
    List of files to be backed up.

    A BackupFileList is a L{FilesystemList} containing a list of files to be
    backed up. It only contains files, not directories (soft links are
    treated like files). On top of the generic functionality provided by
    L{FilesystemList}, this class adds functionality to keep a hash
    (checksum) for each file in the list, and it also provides a method to
    calculate the total size of the files in the list and a way to export
    the list into tar form.

    @sort: __init__, addDir, totalSize, generateSizeMap, generateDigestMap,
           generateFitted, generateTarfile, removeUnchanged
    """

    ##############
    # Constructor
    ##############

    def __init__(self):
        """Initializes a list with no configured exclusions."""
        FilesystemList.__init__(self)

    ################################
    # Overridden superclass methods
    ################################

    def addDir(self, path):
        """
        Adds a directory to the list.

        Note that this class does not allow directories to be added by
        themselves (a backup list contains only files). However, since links
        to directories are technically files, we allow them to be added.

        This method is implemented in terms of the superclass method, with
        one additional validation: the superclass method is only called if
        the passed-in path is both a directory and a link. All of the
        superclass's existing validations and restrictions apply.

        @param path: Directory path to be added to the list
        @type path: String representing a path on disk

        @return: Number of items added to the list.

        @raise ValueError: If path is not a directory or does not exist.
        @raise ValueError: If the path could not be encoded properly.
        """
        path = encodePath(path)
        path = normalizeDir(path)
        if os.path.isdir(path) and not os.path.islink(path):
            return 0
        else:
            return FilesystemList.addDir(self, path)

    ##################
    # Utility methods
    ##################

    def totalSize(self):
        """
        Returns the total size among all files in the list.
        Only files are counted.
        Soft links that point at files are ignored.
        Entries which do not exist on disk are ignored.
        @return: Total size, in bytes
        """
        total = 0.0
        for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
                total += float(os.stat(entry).st_size)
        return total
    def generateSizeMap(self):
        """
        Generates a mapping from file to file size in bytes.
        The mapping does include soft links, which are listed with size zero.
        Entries which do not exist on disk are ignored.
        @return: Dictionary mapping file to file size
        """
        table = { }
        for entry in self:
            if os.path.islink(entry):
                table[entry] = 0.0
            elif os.path.isfile(entry):
                table[entry] = float(os.stat(entry).st_size)
        return table
    def generateDigestMap(self, stripPrefix=None):
        """
        Generates a mapping from file to file digest.

        Currently, the digest is an SHA hash, which should be pretty secure.
        In the future, this might be a different kind of hash, but we
        guarantee that the type of the hash will not change unless the
        library major version number is bumped.

        Entries which do not exist on disk are ignored.

        Soft links are ignored. We would end up generating a digest for the
        file that the soft link points at, which doesn't make any sense.

        If C{stripPrefix} is passed in, then that prefix will be stripped
        from each key when the map is generated. This can be useful in
        generating two "relative" digest maps to be compared to one another.

        @param stripPrefix: Common prefix to be stripped from paths
        @type stripPrefix: String with any contents

        @return: Dictionary mapping file to digest value
        @see: L{removeUnchanged}
        """
        table = { }
        if stripPrefix is not None:
            for entry in self:
                if os.path.isfile(entry) and not os.path.islink(entry):
                    table[entry.replace(stripPrefix, "", 1)] = BackupFileList._generateDigest(entry)
        else:
            for entry in self:
                if os.path.isfile(entry) and not os.path.islink(entry):
                    table[entry] = BackupFileList._generateDigest(entry)
        return table

    @staticmethod
    def _generateDigest(path):
        """
        Generates an SHA digest for a given file on disk.

        The original code for this function used this simplistic
        implementation, which requires reading the entire file into memory
        at once in order to generate a digest value::

            sha.new(open(path).read()).hexdigest()

        Not surprisingly, this isn't an optimal solution. The U{Simple file
        hashing <http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/259109>}
        Python Cookbook recipe describes how to incrementally generate a hash
        value by reading in chunks of data rather than reading the file all
        at once. The recipe relies on the C{update()} method of the various
        Python hashing algorithms.

        In my tests using a 110 MB file on CD, the original implementation
        requires 111 seconds. This implementation requires only 40-45
        seconds, which is a pretty substantial speed-up.

        Experience shows that reading in around 4kB (4096 bytes) at a time
        yields the best performance. Smaller reads are quite a bit slower,
        and larger reads don't make much of a difference. The 4kB number
        makes me a little suspicious, and I think it might be related to the
        size of a filesystem read at the hardware level. However, I've
        decided to just hardcode 4096 until I have evidence that shows it's
        worthwhile making the read size configurable.

        @param path: Path to generate digest for.

        @return: ASCII-safe SHA digest for the file.
        @raise OSError: If the file cannot be opened.
        """
        # pylint: disable=C0103
        try:
            import hashlib
            s = hashlib.sha1()
        except ImportError:
            import sha
            s = sha.new()
        f = open(path, mode="rb")  # in case platform cares about binary reads
        readBytes = 4096  # see notes above
        while readBytes > 0:
            readString = f.read(readBytes)
            s.update(readString)
            readBytes = len(readString)
        f.close()
        digest = s.hexdigest()
        logger.debug("Generated digest [%s] for file [%s]." % (digest, path))
        return digest
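The incremental-hashing recipe described above translates directly to modern Python. This is a minimal sketch (the function name is hypothetical) using the same 4096-byte chunk size:

```python
import hashlib

def chunked_sha1(path, chunk_size=4096):
    """Incrementally compute a SHA-1 hex digest for a file, reading
    chunk_size bytes at a time so the whole file never sits in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:            # empty read means end of file
                break
            digest.update(chunk)
    return digest.hexdigest()
```

The result is identical to hashing the whole file in one call; only the memory profile differs.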
    def generateFitted(self, capacity, algorithm="worst_fit"):
        """
        Generates a list of items that fit in the indicated capacity.

        Sometimes, callers would like to include every item in a list, but
        are unable to because not all of the items fit in the space
        available. This method returns a copy of the list, containing only
        the items that fit in a given capacity. A copy is returned so that
        we don't lose any information if for some reason the fitted list is
        unsatisfactory.

        The fitting is done using the functions in the knapsack module. By
        default, the worst fit algorithm is used, but you can also choose
        from first fit, best fit and alternate fit.

        @param capacity: Maximum capacity among the files in the new list
        @type capacity: Integer, in bytes

        @param algorithm: Knapsack (fit) algorithm to use
        @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit"

        @return: Copy of list with total size no larger than indicated capacity
        @raise ValueError: If the algorithm is invalid.
        """
        table = self._getKnapsackTable()
        function = BackupFileList._getKnapsackFunction(algorithm)
        return function(table, capacity)[0]
    def generateSpan(self, capacity, algorithm="worst_fit"):
        """
        Splits the list of items into sub-lists that fit in a given capacity.

        Sometimes, callers need to split a backup file list into a set of
        smaller lists. For instance, you could use this to "span" the files
        across a set of discs.

        The fitting is done using the functions in the knapsack module. By
        default, the worst fit algorithm is used, but you can also choose
        from first fit, best fit and alternate fit.

        @note: If any of your items are larger than the capacity, then it
        won't be possible to find a solution. In this case, a
        C{ValueError} will be raised.

        @param capacity: Maximum capacity among the files in the new list
        @type capacity: Integer, in bytes

        @param algorithm: Knapsack (fit) algorithm to use
        @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit"

        @return: List of L{SpanItem} objects.

        @raise ValueError: If the algorithm is invalid.
        @raise ValueError: If it's not possible to fit some items
        """
        spanItems = []
        function = BackupFileList._getKnapsackFunction(algorithm)
        table = self._getKnapsackTable(capacity)
        iteration = 0
        while len(table) > 0:
            iteration += 1
            fit = function(table, capacity)
            if len(fit[0]) == 0:
                # Should never happen due to validations in _convertToKnapsackForm(), but let's be safe
                raise ValueError("After iteration %d, unable to add any new items." % iteration)
            removeKeys(table, fit[0])
            utilization = (float(fit[1])/float(capacity))*100.0
            item = SpanItem(fit[0], fit[1], capacity, utilization)
            spanItems.append(item)
        return spanItems
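The span loop above (fit, remove the placed items, repeat until the table is empty) can be sketched independently of the knapsack module. This is an illustrative stand-in, not Cedar Backup's code: a naive greedy first-fit replaces the real fit functions, and all names are hypothetical.

```python
def first_fit(sizes, capacity):
    """Greedy first fit: take items in table order while they still fit.
    Returns (list of chosen names, total size chosen)."""
    chosen, total = [], 0
    for name, size in sizes.items():
        if total + size <= capacity:
            chosen.append(name)
            total += size
    return chosen, total

def generate_span(sizes, capacity):
    """Mirror generateSpan's loop: reject oversized items up front, then
    repeatedly fit and remove items until none remain.  Each span is a
    tuple of (names, total size, utilization percentage)."""
    for name, size in sizes.items():
        if size > capacity:
            raise ValueError("Item %s cannot fit in capacity %d." % (name, capacity))
    remaining = dict(sizes)
    spans = []
    while remaining:
        chosen, total = first_fit(remaining, capacity)
        for name in chosen:
            del remaining[name]
        spans.append((chosen, total, 100.0 * total / capacity))
    return spans
```

Because oversized items are rejected before the loop, every iteration is guaranteed to place at least one item, so the loop always terminates.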
    def _getKnapsackTable(self, capacity=None):
        """
        Converts the list into the form needed by the knapsack algorithms.
        @param capacity: Maximum capacity in bytes, or C{None} to skip the capacity check
        @return: Dictionary mapping file name to tuple of (file path, file size).
        @raise ValueError: If a capacity is set and any file is larger than it.
        """
        table = { }
        for entry in self:
            if os.path.islink(entry):
                table[entry] = (entry, 0.0)
            elif os.path.isfile(entry):
                size = float(os.stat(entry).st_size)
                if capacity is not None:
                    if size > capacity:
                        raise ValueError("File [%s] cannot fit in capacity %s." % (entry, displayBytes(capacity)))
                table[entry] = (entry, size)
        return table

    @staticmethod
    def _getKnapsackFunction(algorithm):
        """
        Returns a reference to the function associated with an algorithm name.
        Algorithm name must be one of "first_fit", "best_fit", "worst_fit", "alternate_fit"
        @param algorithm: Name of the algorithm
        @return: Reference to knapsack function
        @raise ValueError: If the algorithm name is unknown.
        """
        if algorithm == "first_fit":
            return firstFit
        elif algorithm == "best_fit":
            return bestFit
        elif algorithm == "worst_fit":
            return worstFit
        elif algorithm == "alternate_fit":
            return alternateFit
        else:
            raise ValueError("Algorithm [%s] is invalid." % algorithm)
    def generateTarfile(self, path, mode='tar', ignore=False, flat=False):
        """
        Creates a tar file containing the files in the list.

        By default, this method will create uncompressed tar files. If you
        pass in mode C{'targz'}, then it will create gzipped tar files, and
        if you pass in mode C{'tarbz2'}, then it will create bzipped tar
        files.

        The tar file will be created as a GNU tar archive, which enables
        extended file name lengths, etc. Since GNU tar is so prevalent, I've
        decided that the extra functionality out-weighs the disadvantage of
        not being "standard".

        If you pass in C{flat=True}, then a "flat" archive will be created,
        and all of the files will be added to the root of the archive. So,
        the file C{/tmp/something/whatever.txt} would be added as just
        C{whatever.txt}.

        By default, the whole method call fails if there are problems adding
        any of the files to the archive, resulting in an exception. Under
        these circumstances, callers are advised that they might want to
        call L{removeInvalid()} and then attempt to create the tar file a
        second time, since the most common cause of failures is a missing
        file (a file that existed when the list was built, but is gone again
        by the time the tar file is built).

        If you want to, you can pass in C{ignore=True}, and the method will
        ignore errors encountered when adding individual files to the
        archive (but not errors opening and closing the archive itself).

        We'll always attempt to remove the tarfile from disk if an exception
        is thrown.

        @note: No validation is done as to whether the entries in the list
        are files, since only files or soft links should be in an object
        like this. However, to be safe, everything is explicitly added to
        the tar archive non-recursively so it's safe to include soft links
        to directories.

        @note: The Python C{tarfile} module, which is used internally here,
        is supposed to deal properly with long filenames and links. In my
        testing, I have found that it appears to be able to add really long
        filenames to archives, but doesn't do a good job reading them back
        out, even out of an archive it created. Fortunately, all Cedar
        Backup does is add files to archives.

        @param path: Path of tar file to create on disk
        @type path: String representing a path on disk

        @param mode: Tar creation mode
        @type mode: One of either C{'tar'}, C{'targz'} or C{'tarbz2'}

        @param ignore: Indicates whether to ignore certain errors.
        @type ignore: Boolean

        @param flat: Creates "flat" archive by putting all items in root
        @type flat: Boolean

        @raise ValueError: If mode is not valid
        @raise ValueError: If list is empty
        @raise ValueError: If the path could not be encoded properly.
        @raise TarError: If there is a problem creating the tar file
        """
        # pylint: disable=E1101
        path = encodePath(path)
        if len(self) == 0: raise ValueError("Empty list cannot be used to generate tarfile.")
        if mode == 'tar': tarmode = "w:"
        elif mode == 'targz': tarmode = "w:gz"
        elif mode == 'tarbz2': tarmode = "w:bz2"
        else: raise ValueError("Mode [%s] is not valid." % mode)
        try:
            tar = tarfile.open(path, tarmode)
            try:
                tar.format = tarfile.GNU_FORMAT
            except AttributeError:
                tar.posix = False
            for entry in self:
                try:
                    if flat:
                        tar.add(entry, arcname=os.path.basename(entry), recursive=False)
                    else:
                        tar.add(entry, recursive=False)
                except tarfile.TarError, e:
                    if not ignore:
                        raise e
                    logger.info("Unable to add file [%s]; going on anyway." % entry)
                except OSError, e:
                    if not ignore:
                        raise tarfile.TarError(e)
                    logger.info("Unable to add file [%s]; going on anyway." % entry)
            tar.close()
        except tarfile.ReadError, e:
            try: tar.close()
            except: pass
            if os.path.exists(path):
                try: os.remove(path)
                except: pass
            raise tarfile.ReadError("Unable to open [%s]; maybe directory doesn't exist?" % path)
        except tarfile.TarError, e:
            try: tar.close()
            except: pass
            if os.path.exists(path):
                try: os.remove(path)
                except: pass
            raise e
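The essentials of the method above (GNU format, non-recursive flat adds, and removing the partial archive on failure) can be sketched in modern Python. This is a simplified stand-in, not the library's code, and the function name is hypothetical:

```python
import os
import tarfile

def create_flat_tarfile(tar_path, file_paths):
    """Create an uncompressed GNU-format tar archive.  Each file is added
    non-recursively under its basename ('flat'), and any partial archive is
    removed from disk if creation fails."""
    try:
        with tarfile.open(tar_path, "w:") as tar:
            tar.format = tarfile.GNU_FORMAT      # extended name lengths, etc.
            for entry in file_paths:
                tar.add(entry, arcname=os.path.basename(entry), recursive=False)
    except (tarfile.TarError, OSError):
        if os.path.exists(tar_path):
            os.remove(tar_path)
        raise
```

Passing `"w:gz"` or `"w:bz2"` instead of `"w:"` yields the compressed variants, mirroring the C{'targz'} and C{'tarbz2'} modes.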
    def removeUnchanged(self, digestMap, captureDigest=False):
        """
        Removes unchanged entries from the list.

        This method relies on a digest map as returned from
        L{generateDigestMap}. For each entry in C{digestMap}, if the entry
        also exists in the current list I{and} the entry in the current list
        has the same digest value as in the map, the entry in the current
        list will be removed.

        This method offers a convenient way for callers to filter unneeded
        entries from a list. The idea is that a caller will capture a digest
        map from C{generateDigestMap} at some point in time (perhaps the
        beginning of the week), and will save off that map using C{pickle}
        or some other method. Then, the caller could use this method
        sometime in the future to filter out any unchanged files based on
        the saved-off map.

        If C{captureDigest} is passed-in as C{True}, then digest information
        will be captured for the entire list before the removal step occurs
        using the same rules as in L{generateDigestMap}. The check will
        involve a lookup into the complete digest map.

        If C{captureDigest} is passed in as C{False}, we will only generate
        a digest value for files we actually need to check, and we'll ignore
        any entry in the list which isn't a file that currently exists on
        disk.

        The return value varies depending on C{captureDigest}, as well. To
        preserve backwards compatibility, if C{captureDigest} is C{False},
        then we'll just return a single value representing the number of
        entries removed. Otherwise, we'll return a tuple of C{(entries
        removed, digest map)}. The returned digest map will be in exactly
        the form returned by L{generateDigestMap}.

        @note: For performance reasons, this method actually ends up
        rebuilding the list from scratch. First, we build a temporary
        dictionary containing all of the items from the original list.
        Then, we remove items as needed from the dictionary (which is faster
        than the equivalent operation on a list). Finally, we replace the
        contents of the current list based on the keys left in the
        dictionary. This should be transparent to the caller.

        @param digestMap: Dictionary mapping file name to digest value.
        @type digestMap: Map as returned from L{generateDigestMap}.

        @param captureDigest: Indicates that digest information should be captured.
        @type captureDigest: Boolean

        @return: Number of entries removed, or tuple of C{(entries removed,
        digest map)} if C{captureDigest} is C{True}.
        """
        if captureDigest:
            removed = 0
            table = {}
            captured = {}
            for entry in self:
                if os.path.isfile(entry) and not os.path.islink(entry):
                    table[entry] = BackupFileList._generateDigest(entry)
                    captured[entry] = table[entry]
                else:
                    table[entry] = None
            for entry in digestMap.keys():
                if table.has_key(entry):
                    if table[entry] is not None:  # equivalent to file/link check in other case
                        digest = table[entry]
                        if digest == digestMap[entry]:
                            removed += 1
                            del table[entry]
                            logger.debug("Discarded unchanged file [%s]." % entry)
            self[:] = table.keys()
            return (removed, captured)
        else:
            removed = 0
            table = {}
            for entry in self:
                table[entry] = None
            for entry in digestMap.keys():
                if table.has_key(entry):
                    if os.path.isfile(entry) and not os.path.islink(entry):
                        digest = BackupFileList._generateDigest(entry)
                        if digest == digestMap[entry]:
                            removed += 1
                            del table[entry]
                            logger.debug("Discarded unchanged file [%s]." % entry)
            self[:] = table.keys()
            return removed
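The core comparison performed above (keep only entries whose digest differs from, or is absent in, a saved map) can be sketched on plain dictionaries. This is an illustration of the idea, not the library's code, and the names are hypothetical:

```python
def filter_unchanged(current_digests, saved_digests):
    """Given a map of current file digests and a previously saved digest map
    (both path -> digest), return only the entries that still need backing
    up: files whose digest changed or that are new since the map was saved."""
    return {path: digest
            for path, digest in current_digests.items()
            if saved_digests.get(path) != digest}
```

A caller would save the map from one run (e.g. with `pickle`) and feed it back on the next run, just as described for C{generateDigestMap} and C{removeUnchanged}.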
    1226 1227 ######################################################################## 1228 # PurgeItemList class definition 1229 ######################################################################## 1230 1231 -class PurgeItemList(FilesystemList): # pylint: disable=R0904
    1232 1233 ###################### 1234 # Class documentation 1235 ###################### 1236 1237 """ 1238 List of files and directories to be purged. 1239 1240 A PurgeItemList is a L{FilesystemList} containing a list of files and 1241 directories to be purged. On top of the generic functionality provided by 1242 L{FilesystemList}, this class adds functionality to remove items that are 1243 too young to be purged, and to actually remove each item in the list from 1244 the filesystem. 1245 1246 The other main difference is that when you add a directory's contents to a 1247 purge item list, the directory itself is not added to the list. This way, 1248 if someone asks to purge within in C{/opt/backup/collect}, that directory 1249 doesn't get removed once all of the files within it is gone. 1250 """ 1251 1252 ############## 1253 # Constructor 1254 ############## 1255
   def __init__(self):
      """Initializes a list with no configured exclusions."""
      FilesystemList.__init__(self)


   ##############
   # Add methods
   ##############
   def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False):
      """
      Adds the contents of a directory to the list.

      The path must exist and must be a directory or a link to a directory.
      The contents of the directory (but I{not} the directory path itself)
      will be recursively added to the list, subject to any exclusions that
      are in place.  If you only want the directory's immediate contents to
      be added, then pass in C{recursive=False}.

      @note: If a directory's absolute path matches an exclude pattern or path,
      or if the directory contains the configured ignore file, then the
      directory and all of its contents will be recursively excluded from the
      list.

      @note: If the passed-in directory happens to be a soft link, it will be
      recursed.  However, the C{linkDepth} parameter controls whether any soft
      links I{within} the directory will be recursed.  The link depth is the
      maximum depth of the tree at which soft links should be followed.  So, a
      depth of 0 does not follow any soft links, a depth of 1 follows only
      links within the passed-in directory, a depth of 2 follows the links at
      the next level down, etc.

      @note: Any invalid soft links (i.e. soft links that point to
      non-existent items) will be silently ignored.

      @note: The L{excludeLinks} flag only controls whether any given soft
      link path itself is added to the list once it has been discovered.  It
      does I{not} modify any behavior related to directory recursion.

      @note: The L{excludeDirs} flag only controls whether any given directory
      path itself is added to the list once it has been discovered.  It does
      I{not} modify any behavior related to directory recursion.

      @note: If you call this method I{on a link to a directory} that link will
      never be dereferenced (it may, however, be followed).

      @param path: Directory path whose contents should be added to the list
      @type path: String representing a path on disk

      @param recursive: Indicates whether directory contents should be added recursively.
      @type recursive: Boolean value

      @param addSelf: Ignored in this subclass.

      @param linkDepth: Depth of soft links that should be followed
      @type linkDepth: Integer value, where zero means not to follow any soft links

      @param dereference: Indicates whether soft links, if followed, should be dereferenced
      @type dereference: Boolean value

      @return: Number of items recursively added to the list

      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      return super(PurgeItemList, self)._addDirContentsInternal(path, False, recursive, linkDepth, dereference)
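The linkDepth semantics described above are easiest to see in a simplified walker: the budget is decremented on each recursive call, so symlinked directories are followed only down to the requested level. This is a Python 3 sketch, not the real `_addDirContentsInternal`, which also handles exclusions, encoding, and the ignore file:

```python
import os

def walk_contents(path, link_depth=0):
    """Yield paths under `path`, following symlinked directories only
    while link_depth > 0; each recursion level decrements the budget."""
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            if os.path.islink(full) and link_depth <= 0:
                yield full          # listed, but not followed
                continue
            yield full
            # links *within* this directory get one less level of budget
            for sub in walk_contents(full, link_depth - 1):
                yield sub
        else:
            yield full
```

So `link_depth=0` never follows a link, `link_depth=1` follows links in the top directory only, and so on, matching the note in the docstring.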


   ##################
   # Utility methods
   ##################

   def removeYoungFiles(self, daysOld):
      """
      Removes from the list files younger than a certain age (in days).

      Any file whose "age" in days is less than (C{<}) the value of the
      C{daysOld} parameter will be removed from the list so that it will not
      be purged later when L{purgeItems} is called.  Directories and soft
      links will be ignored.

      The "age" of a file is the amount of time since the file was last used,
      per the most recent of the file's C{st_atime} and C{st_mtime} values.

      @note: Some people find the "sense" of this method confusing or
      "backwards".  Keep in mind that this method is used to remove items
      I{from the list}, not from the filesystem!  It removes from the list
      those items that you would I{not} want to purge because they are too
      young.  As an example, passing in C{daysOld} of zero (0) would remove
      from the list no files, which would result in purging all of the files
      later.  I would be happy to make a synonym of this method with an
      easier-to-understand "sense", if someone can suggest one.

      @param daysOld: Minimum age of files that are to be kept in the list.
      @type daysOld: Integer value >= 0.

      @return: Number of entries removed
      """
      removed = 0
      daysOld = int(daysOld)
      if daysOld < 0:
         raise ValueError("Days old value must be an integer >= 0.")
      for entry in self[:]:
         if os.path.isfile(entry) and not os.path.islink(entry):
            try:
               ageInDays = calculateFileAge(entry)
               ageInWholeDays = math.floor(ageInDays)
               if ageInWholeDays < daysOld:
                  removed += 1
                  self.remove(entry)
            except OSError:
               pass
      return removed
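The `calculateFileAge` helper is imported from elsewhere; based on the docstring's definition of "age", a plausible Python 3 equivalent of the age-in-whole-days computation looks like this (an assumption about the helper, not its actual implementation):

```python
import math
import os
import time

def file_age_in_whole_days(path):
    """Whole days since the file was last used, per the most recent of
    st_atime and st_mtime -- the quantity removeYoungFiles compares
    against daysOld."""
    stats = os.stat(path)
    last_used = max(stats.st_atime, stats.st_mtime)
    return int(math.floor((time.time() - last_used) / 86400.0))
```

A file touched three days ago therefore has an age of 3, and survives in the list (i.e. will be purged) only when `daysOld <= 3`.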

   def purgeItems(self):
      """
      Purges all items in the list.

      Every item in the list will be purged.  Directories in the list will
      I{not} be purged recursively, and hence will only be removed if they
      are empty.  Errors will be ignored.

      To facilitate easy removal of directories that will end up being empty,
      the delete process happens in two passes: files first (including soft
      links), then directories.

      @return: Tuple containing count of (files, dirs) removed
      """
      files = 0
      dirs = 0
      for entry in self:
         if os.path.exists(entry) and (os.path.isfile(entry) or os.path.islink(entry)):
            try:
               os.remove(entry)
               files += 1
               logger.debug("Purged file [%s]." % entry)
            except OSError:
               pass
      for entry in self:
         if os.path.exists(entry) and os.path.isdir(entry) and not os.path.islink(entry):
            try:
               os.rmdir(entry)
               dirs += 1
               logger.debug("Purged empty directory [%s]." % entry)
            except OSError:
               pass
      return (files, dirs)
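The two-pass structure matters because `os.rmdir` only succeeds on empty directories: deleting files first is what empties them. A standalone Python 3 sketch of the same pattern:

```python
import os

def purge(entries):
    """Two-pass delete mirroring purgeItems: files and soft links first,
    then directories; errors are ignored so one failure doesn't abort
    the purge."""
    files = dirs = 0
    for entry in entries:
        if os.path.exists(entry) and (os.path.isfile(entry) or os.path.islink(entry)):
            try:
                os.remove(entry)
                files += 1
            except OSError:
                pass
    for entry in entries:
        if os.path.isdir(entry) and not os.path.islink(entry):
            try:
                os.rmdir(entry)  # only removes directories that are now empty
                dirs += 1
            except OSError:
                pass
    return files, dirs
```

If a directory in the list still contains something not in the list, the `rmdir` fails silently and the directory survives, which is the intended behavior.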


########################################################################
# Public functions
########################################################################

##########################
# normalizeDir() function
##########################

def normalizeDir(path):
   """
   Normalizes a directory name.

   For our purposes, a directory name is normalized by removing the trailing
   path separator, if any.  This is important because we want directories to
   appear within lists in a consistent way, although from the user's
   perspective passing in C{/path/to/dir/} and C{/path/to/dir} are
   equivalent.

   @param path: Path to be normalized.
   @type path: String representing a path on disk

   @return: Normalized path, which should be equivalent to the original.
   """
   if path != os.sep and path[-1:] == os.sep:
      return path[:-1]
   return path
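The rule is small enough to verify with a few concrete cases; this is a direct Python 3 transcription (the POSIX separator in the examples is an assumption):

```python
import os

def normalize_dir(path):
    """Strip a single trailing separator, leaving the root alone --
    the same rule as normalizeDir above."""
    if path != os.sep and path[-1:] == os.sep:
        return path[:-1]
    return path
```

Note that only one trailing separator is stripped, and the filesystem root (`/` on POSIX) is deliberately left untouched so it is never normalized to an empty string.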


#############################
# compareContents() function
#############################

def compareContents(path1, path2, verbose=False):
   """
   Compares the contents of two directories to see if they are equivalent.

   The two directories are recursively compared.  First, we check whether
   they contain exactly the same set of files.  Then, we check that every
   given file has exactly the same contents in both directories.

   This is all relatively simple to implement through the magic of
   L{BackupFileList.generateDigestMap}, which knows how to strip a path
   prefix off the front of each entry in the mapping it generates.  This
   makes our comparison as simple as creating a list for each path, then
   generating a digest map for each path and comparing the two.

   If no exception is thrown, the two directories are considered identical.

   If the C{verbose} flag is C{True}, then an alternate (but slower) method
   is used so that any thrown exception can indicate exactly which file
   caused the comparison to fail.  The thrown C{ValueError} exception
   distinguishes between the directories containing different files, and
   containing the same files with differing content.

   @note: Symlinks are I{not} followed for the purposes of this comparison.

   @param path1: First path to compare.
   @type path1: String representing a path on disk

   @param path2: Second path to compare.
   @type path2: String representing a path on disk

   @param verbose: Indicates whether a verbose response should be given.
   @type verbose: Boolean

   @raise ValueError: If a directory doesn't exist or can't be read.
   @raise ValueError: If the two directories are not equivalent.
   @raise IOError: If there is an unusual problem reading the directories.
   """
   try:
      path1List = BackupFileList()
      path1List.addDirContents(path1)
      path1Digest = path1List.generateDigestMap(stripPrefix=normalizeDir(path1))
      path2List = BackupFileList()
      path2List.addDirContents(path2)
      path2Digest = path2List.generateDigestMap(stripPrefix=normalizeDir(path2))
      compareDigestMaps(path1Digest, path2Digest, verbose)
   except IOError, e:
      logger.error("I/O error encountered during consistency check.")
      raise e
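The prefix-stripping trick is what makes the comparison work: keying each digest map by path *relative to its root* lets two different trees be compared with a plain dictionary equality test. A simplified Python 3 stand-in (using `os.walk` and SHA-1 directly rather than the real `BackupFileList`):

```python
import hashlib
import os

def digest_map(root):
    """Map of relative path -> SHA-1 digest for every regular file under
    root; the relative key plays the role of generateDigestMap's
    stripPrefix argument."""
    result = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if os.path.isfile(full) and not os.path.islink(full):
                with open(full, "rb") as f:
                    digest = hashlib.sha1(f.read()).hexdigest()
                result[os.path.relpath(full, root)] = digest
    return result

def compare_contents(path1, path2):
    """Raise ValueError unless the two trees hold identical files."""
    if digest_map(path1) != digest_map(path2):
        raise ValueError("Consistency check failed.")
```

As in the real function, "no exception" means the directories are considered identical.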

def compareDigestMaps(digest1, digest2, verbose=False):
   """
   Compares two digest maps and throws an exception if they differ.

   @param digest1: First digest to compare.
   @type digest1: Digest as returned from BackupFileList.generateDigestMap()

   @param digest2: Second digest to compare.
   @type digest2: Digest as returned from BackupFileList.generateDigestMap()

   @param verbose: Indicates whether a verbose response should be given.
   @type verbose: Boolean

   @raise ValueError: If the two directories are not equivalent.
   """
   if not verbose:
      if digest1 != digest2:
         raise ValueError("Consistency check failed.")
   else:
      list1 = UnorderedList(digest1.keys())
      list2 = UnorderedList(digest2.keys())
      if list1 != list2:
         raise ValueError("Directories contain a different set of files.")
      for key in list1:
         if digest1[key] != digest2[key]:
            raise ValueError("File contents for [%s] vary between directories." % key)

CedarBackup2-2.22.0/doc/interface/CedarBackup2.filesystem.FilesystemList-class.html

CedarBackup2.filesystem.FilesystemList
    Package CedarBackup2 :: Module filesystem :: Class FilesystemList

    Class FilesystemList

    source code

    object --+    
             |    
          list --+
                 |
                FilesystemList
    
Known Subclasses:
    BackupFileList, PurgeItemList

    Represents a list of filesystem items.

    This is a generic class that represents a list of filesystem items. Callers can add individual files or directories to the list, or can recursively add the contents of a directory. The class also allows for up-front exclusions in several forms (all files, all directories, all items matching a pattern, all items whose basename matches a pattern, or all directories containing a specific "ignore file"). Symbolic links are typically backed up non-recursively, i.e. the link to a directory is backed up, but not the contents of that link (we don't want to deal with recursive loops, etc.).

    The custom methods such as addFile will only add items if they exist on the filesystem and do not match any exclusions that are already in place. However, since a FilesystemList is a subclass of Python's standard list class, callers can also add items to the list in the usual way, using methods like append() or insert(). No validations apply to items added to the list in this way; however, many list-manipulation methods deal "gracefully" with items that don't exist in the filesystem, often by ignoring them.

    Once a list has been created, callers can remove individual items from the list using standard methods like pop() or remove() or they can use custom methods to remove specific types of entries or entries which match a particular pattern.


    Notes:
    • Regular expression patterns that apply to paths are assumed to be bounded at front and back by the beginning and end of the string, i.e. they are treated as if they begin with ^ and end with $. This is true whether we are matching a complete path or a basename.
    • Some platforms, like Windows, do not support soft links. On those platforms, the ignore-soft-links flag can be set, but it won't do any good because the operating system never reports a file as a soft link.
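The first note means an exclude pattern must cover the *whole* string to match. A small Python 3 sketch of that bounding rule (the real list code compiles its own patterns; this only illustrates the semantics):

```python
import re

def matches_exclusion(pattern, value):
    """True when `pattern` matches the whole string, i.e. as if it were
    written ^pattern$ -- the bounding rule described in the note above."""
    return re.fullmatch(pattern, value) is not None
```

So an exclusion of `file` does not exclude `/data/file.tmp`; you would need something like `.*\.tmp` or `/data/.*` to cover the complete path.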
Instance Methods
    new empty list
    __init__(self)
    Initializes a list with no configured exclusions.
    source code
     
    addFile(self, path)
    Adds a file to the list.
    source code
     
    addDir(self, path)
    Adds a directory to the list.
    source code
     
    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)
    Adds the contents of a directory to the list.
    source code
     
    removeFiles(self, pattern=None)
    Removes file entries from the list.
    source code
     
    removeDirs(self, pattern=None)
    Removes directory entries from the list.
    source code
     
    removeLinks(self, pattern=None)
    Removes soft link entries from the list.
    source code
     
    removeMatch(self, pattern)
    Removes from the list all entries matching a pattern.
    source code
     
    removeInvalid(self)
    Removes from the list all entries that do not exist on disk.
    source code
     
    normalize(self)
    Normalizes the list, ensuring that each entry is unique.
    source code
     
    _setExcludeFiles(self, value)
    Property target used to set the exclude files flag.
    source code
     
    _getExcludeFiles(self)
    Property target used to get the exclude files flag.
    source code
     
    _setExcludeDirs(self, value)
    Property target used to set the exclude directories flag.
    source code
     
    _getExcludeDirs(self)
    Property target used to get the exclude directories flag.
    source code
     
    _setExcludeLinks(self, value)
    Property target used to set the exclude soft links flag.
    source code
     
    _getExcludeLinks(self)
    Property target used to get the exclude soft links flag.
    source code
     
    _setExcludePaths(self, value)
    Property target used to set the exclude paths list.
    source code
     
    _getExcludePaths(self)
    Property target used to get the absolute exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code
     
    _setExcludeBasenamePatterns(self, value)
    Property target used to set the exclude basename patterns list.
    source code
     
    _getExcludeBasenamePatterns(self)
    Property target used to get the exclude basename patterns list.
    source code
     
    _setIgnoreFile(self, value)
    Property target used to set the ignore file.
    source code
     
    _getIgnoreFile(self)
    Property target used to get the ignore file.
    source code
     
    _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False)
    Internal implementation of addDirContents.
    source code
     
    verify(self)
    Verifies that all entries in the list exist on disk.
    source code

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Class Variables

    Inherited from list: __hash__

Properties
      excludeFiles
    Boolean indicating whether files should be excluded.
      excludeDirs
    Boolean indicating whether directories should be excluded.
      excludeLinks
    Boolean indicating whether soft links should be excluded.
      excludePaths
    List of absolute paths to be excluded.
      excludePatterns
    List of regular expression patterns (matching complete path) to be excluded.
      excludeBasenamePatterns
    List of regular expression patterns (matching basename) to be excluded.
      ignoreFile
    Name of file which will cause directory contents to be ignored.

    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)

    source code 

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addFile(self, path)

    source code 

    Adds a file to the list.

    The path must exist and must be a file or a link to an existing file. It will be added to the list subject to any exclusions that are in place.

    Parameters:
    • path (String representing a path on disk) - File path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a file or does not exist.
    • ValueError - If the path could not be encoded properly.

    addDir(self, path)

    source code 

    Adds a directory to the list.

    The path must exist and must be a directory or a link to an existing directory. It will be added to the list subject to any exclusions that are in place. The ignoreFile does not apply to this method, only to addDirContents.

    Parameters:
    • path (String representing a path on disk) - Directory path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.

    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)

    source code 

    Adds the contents of a directory to the list.

    The path must exist and must be a directory or a link to a directory. The contents of the directory (as well as the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory and its immediate contents to be added, then pass in recursive=False.

    Parameters:
    • path (String representing a path on disk) - Directory path whose contents should be added to the list
    • recursive (Boolean value) - Indicates whether directory contents should be added recursively.
    • addSelf (Boolean value) - Indicates whether the directory itself should be added to the list.
    • linkDepth (Integer value, where zero means not to follow any soft links) - Maximum depth of the tree at which soft links should be followed
    • dereference (Boolean value) - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Notes:
    • If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list.
    • If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links within the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc.
    • Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored.
    • The excludeDirs flag only controls whether any given directory path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • If you call this method on a link to a directory that link will never be dereferenced (it may, however, be followed).

    removeFiles(self, pattern=None)

    source code 

    Removes file entries from the list.

    If pattern is not passed in or is None, then all file entries will be removed from the list. Otherwise, only those file entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all files, then you will be better off setting excludeFiles to True before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeDirs(self, pattern=None)

    source code 

    Removes directory entries from the list.

    If pattern is not passed in or is None, then all directory entries will be removed from the list. Otherwise, only those directory entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all directories, then you will be better off setting excludeDirs to True before adding items to the list (note that this will not prevent you from recursively adding the contents of directories).

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeLinks(self, pattern=None)

    source code 

    Removes soft link entries from the list.

    If pattern is not passed in or is None, then all soft link entries will be removed from the list. Otherwise, only those soft link entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all soft links, then you will be better off setting excludeLinks to True before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeMatch(self, pattern)

    source code 

    Removes from the list all entries matching a pattern.

    This method removes from the list all entries which match the passed in pattern. Since there is no need to check the type of each entry, it is faster to call this method than to call the removeFiles, removeDirs or removeLinks methods individually. If you know which patterns you will want to remove ahead of time, you may be better off setting excludePatterns or excludeBasenamePatterns before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed.
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    Note: Unlike when using the exclude lists, the pattern here is not bounded at the front and the back of the string. You can use any pattern you want.

    removeInvalid(self)

    source code 

    Removes from the list all entries that do not exist on disk.

    This method removes from the list all entries which do not currently exist on disk in some form. No attention is paid to whether the entries are files or directories.

    Returns:
    Number of entries removed.

    _setExcludeFiles(self, value)

    source code 

    Property target used to set the exclude files flag. No validations, but we normalize the value to True or False.

    _setExcludeDirs(self, value)

    source code 

    Property target used to set the exclude directories flag. No validations, but we normalize the value to True or False.

    _setExcludeLinks(self, value)

    source code 

    Property target used to set the exclude soft links flag. No validations, but we normalize the value to True or False.

    _setExcludePaths(self, value)

    source code 

    Property target used to set the exclude paths list. A None value is converted to an empty list. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If any list element is not an absolute path.

    _setExcludePatterns(self, value)

    source code 

    Property target used to set the exclude patterns list. A None value is converted to an empty list.

    _setExcludeBasenamePatterns(self, value)

    source code 

    Property target used to set the exclude basename patterns list. A None value is converted to an empty list.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False)

    source code 

    Internal implementation of addDirContents.

    This internal implementation exists due to some refactoring. Basically, some subclasses have a need to add the contents of a directory, but not the directory itself. This is different than the standard FilesystemList behavior and actually ends up making a special case out of the first call in the recursive chain. Since I don't want to expose the modified interface, addDirContents ends up being wholly implemented in terms of this method.

    The linkDepth parameter controls whether soft links are followed when we are adding the contents recursively. Any recursive calls reduce the value by one. If the value is zero or less, then soft links will just be added as directories, but will not be followed. This means that links are followed to a constant depth starting from the top-most directory.

    There is one difference between soft links and directories: soft links that are added recursively are not placed into the list explicitly. This is because if we do add the links recursively, the resulting tar file gets a little confused (it has a link and a directory with the same name).

    Parameters:
    • path - Directory path whose contents should be added to the list.
    • includePath - Indicates whether to include the path as well as contents.
    • recursive - Indicates whether directory contents should be added recursively.
    • linkDepth - Depth of soft links that should be followed
    • dereference - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.

    Note: If you call this method on a link to a directory that link will never be dereferenced (it may, however, be followed).

    verify(self)

    source code 

    Verifies that all entries in the list exist on disk.

    Returns:
    True if all entries exist, False otherwise.

Property Details

    excludeFiles

    Boolean indicating whether files should be excluded.

    Get Method:
    _getExcludeFiles(self) - Property target used to get the exclude files flag.
    Set Method:
    _setExcludeFiles(self, value) - Property target used to set the exclude files flag.

    excludeDirs

    Boolean indicating whether directories should be excluded.

    Get Method:
    _getExcludeDirs(self) - Property target used to get the exclude directories flag.
    Set Method:
    _setExcludeDirs(self, value) - Property target used to set the exclude directories flag.

    excludeLinks

    Boolean indicating whether soft links should be excluded.

    Get Method:
    _getExcludeLinks(self) - Property target used to get the exclude soft links flag.
    Set Method:
    _setExcludeLinks(self, value) - Property target used to set the exclude soft links flag.

    excludePaths

    List of absolute paths to be excluded.

    Get Method:
    _getExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setExcludePaths(self, value) - Property target used to set the exclude paths list.

    excludePatterns

    List of regular expression patterns (matching complete path) to be excluded.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    excludeBasenamePatterns

    List of regular expression patterns (matching basename) to be excluded.

    Get Method:
    _getExcludeBasenamePatterns(self) - Property target used to get the exclude basename patterns list.
    Set Method:
    _setExcludeBasenamePatterns(self, value) - Property target used to set the exclude basename patterns list.

    ignoreFile

    Name of file which will cause directory contents to be ignored.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.actions.validate-module.html

validate

    Module validate


    Functions

    executeValidate

    Variables

    __package__
    logger

CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.ExtendedAction-class.html

CedarBackup2.config.ExtendedAction
    Package CedarBackup2 :: Module config :: Class ExtendedAction

    Class ExtendedAction

    source code

    object --+
             |
            ExtendedAction
    

    Class representing an extended action.

    Essentially, an extended action needs to allow the following to happen:

      exec("from %s import %s" % (module, function))
      exec("%s(action, configPath)" % function)
    

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The module must be a non-empty string and a valid Python identifier.
    • The function must be a non-empty string and a valid Python identifier.
    • If set, the index must be a positive integer.
    • If set, the dependencies attribute must be an ActionDependencies object.
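The exec-based hook above amounts to a dynamic import plus a call. A Python 3 sketch of the same effect using `importlib` (the real code uses exec as shown, and real extended actions live in caller-supplied modules; `posixpath.join` in the usage note below is just a convenient stand-in for an action function taking two arguments):

```python
import importlib

def invoke_extended_action(module, function, action, config_path):
    """Import `module`, look up `function`, and call it with the action
    name and config path -- equivalent to the exec calls above, without
    building source strings."""
    mod = importlib.import_module(module)
    func = getattr(mod, function)
    return func(action, config_path)
```

For example, `invoke_extended_action("posixpath", "join", "collect", "cback.conf")` dynamically imports `posixpath` and calls its `join` with the two arguments.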
Instance Methods
     
    __init__(self, name=None, module=None, function=None, index=None, dependencies=None)
    Constructor for the ExtendedAction class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setName(self, value)
    Property target used to set the action name.
    source code
     
    _getName(self)
    Property target used to get the action name.
    source code
     
    _setModule(self, value)
    Property target used to set the module name.
    source code
     
    _getModule(self)
    Property target used to get the module name.
    source code
     
    _setFunction(self, value)
    Property target used to set the function name.
    source code
     
    _getFunction(self)
    Property target used to get the function name.
    source code
     
    _setIndex(self, value)
    Property target used to set the action index.
    source code
     
    _getIndex(self)
    Property target used to get the action index.
    source code
     
    _setDependencies(self, value)
    Property target used to set the action dependencies information.
    source code
     
    _getDependencies(self)
    Property target used to get action dependencies information.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      name
    Name of the extended action.
      module
    Name of the module containing the extended action function.
      function
    Name of the extended action function.
      index
    Index of action, used for execution ordering.
      dependencies
    Dependencies for action, used for execution ordering.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, module=None, function=None, index=None, dependencies=None)
    (Constructor)

    source code 

    Constructor for the ExtendedAction class.

    Parameters:
    • name - Name of the extended action
    • module - Name of the module containing the extended action function
    • function - Name of the extended action function
    • index - Index of action, used for execution ordering
    • dependencies - Dependencies for action, used for execution ordering
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
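    The -1/0/1 contract described here can be expressed as a small helper (three_way is a hypothetical name, shown only to illustrate the return convention, not Cedar Backup code):

    ```python
    def three_way(left, right):
        # Returns -1, 0 or 1 depending on whether left is <, = or > right,
        # the same contract __cmp__ is documented to follow above.
        return (left > right) - (left < right)

    print(three_way("collect", "purge"))  # → -1, since "collect" sorts first
    ```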

    _setName(self, value)

    source code 

    Property target used to set the action name. The value must be a non-empty string if it is not None. It must also consist only of lower-case letters and digits.

    Raises:
    • ValueError - If the value is an empty string.

    _setModule(self, value)

    source code 

    Property target used to set the module name. The value must be a non-empty string if it is not None. It must also be a valid Python identifier.

    Raises:
    • ValueError - If the value is an empty string.

    _setFunction(self, value)

    source code 

    Property target used to set the function name. The value must be a non-empty string if it is not None. It must also be a valid Python identifier.

    Raises:
    • ValueError - If the value is an empty string.

    _setIndex(self, value)

    source code 

    Property target used to set the action index. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.
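    The index rule stated here can be sketched as a standalone check (validate_index is a hypothetical helper, not the actual property target):

    ```python
    def validate_index(value):
        # Sketch of the rule above: if set, the index must be an integer >= 0;
        # None is allowed, meaning no index has been assigned.
        if value is None:
            return None
        if not isinstance(value, int) or value < 0:
            raise ValueError("Action index must be an integer >= 0.")
        return value
    ```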

    _setDependencies(self, value)

    source code 

    Property target used to set the action dependencies information. If not None, the value must be an ActionDependencies object.

    Raises:
    • ValueError - If the value is not an ActionDependencies object.

    Property Details

    name

    Name of the extended action.

    Get Method:
    _getName(self) - Property target used to get the action name.
    Set Method:
    _setName(self, value) - Property target used to set the action name.

    module

    Name of the module containing the extended action function.

    Get Method:
    _getModule(self) - Property target used to get the module name.
    Set Method:
    _setModule(self, value) - Property target used to set the module name.

    function

    Name of the extended action function.

    Get Method:
    _getFunction(self) - Property target used to get the function name.
    Set Method:
    _setFunction(self, value) - Property target used to set the function name.

    index

    Index of action, used for execution ordering.

    Get Method:
    _getIndex(self) - Property target used to get the action index.
    Set Method:
    _setIndex(self, value) - Property target used to set the action index.

    dependencies

    Dependencies for action, used for execution ordering.

    Get Method:
    _getDependencies(self) - Property target used to get action dependencies information.
    Set Method:
    _setDependencies(self, value) - Property target used to set the action dependencies information.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.UnorderedList-class.html
    Package CedarBackup2 :: Module util :: Class UnorderedList

    Class UnorderedList

    source code

    object --+    
             |    
          list --+
                 |
                UnorderedList
    
    Known Subclasses:

    Class representing an "unordered list".

    An "unordered list" is a list in which only the contents matter, not the order in which the contents appear in the list.

    For instance, we might be keeping track of a set of paths in a list, because it's convenient to have them in that form. However, for comparison purposes, we would only care that the lists contain exactly the same contents, regardless of order.

    I have come up with two reasonable ways of doing this, plus a couple more that would work but would be a pain to implement. My first method is to copy and sort each list, comparing the sorted versions. This will only work if two lists with exactly the same members are guaranteed to sort in exactly the same order. The second way would be to create two Sets and then compare the sets. However, this would lose information about any duplicates in either list. I've decided to go with option #1 for now. I'll modify this code if I run into problems in the future.

    We override the original __eq__, __ne__, __ge__, __gt__, __le__ and __lt__ list methods to change the definition of the various comparison operators. In all cases, the comparison is changed to return the result of the original operation but instead comparing sorted lists. This is going to be quite a bit slower than a normal list, so you probably only want to use it on small lists.
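    A minimal sketch of option #1 as described above, limited to == and != (the class name is illustrative; the real UnorderedList also overrides the ordering operators in the same way):

    ```python
    class UnorderedListSketch(list):
        # Comparisons operate on sorted copies, so element order is ignored
        # but duplicates still matter (unlike a set-based comparison).
        def __eq__(self, other):
            return sorted(self) == sorted(other)
        def __ne__(self, other):
            return sorted(self) != sorted(other)
    ```

    For example, UnorderedListSketch(["/a", "/b"]) compares equal to ["/b", "/a"], while a list with an extra duplicate does not.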

    Instance Methods
     
    __eq__(self, other)
    Definition of == operator for this class.
    source code
     
    __ne__(self, other)
    Definition of != operator for this class.
    source code
     
    __ge__(self, other)
    Definition of ≥ operator for this class.
    source code
     
    __gt__(self, other)
    Definition of > operator for this class.
    source code
     
    __le__(self, other)
    Definition of ≤ operator for this class.
    source code
     
    __lt__(self, other)
    Definition of < operator for this class.
    source code

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __eq__(self, other)
    (Equality operator)

    source code 

    Definition of == operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self == other.
    Overrides: list.__eq__

    __ne__(self, other)

    source code 

    Definition of != operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self != other.
    Overrides: list.__ne__

    __ge__(self, other)
    (Greater-than-or-equals operator)

    source code 

    Definition of ≥ operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self >= other.
    Overrides: list.__ge__

    __gt__(self, other)
    (Greater-than operator)

    source code 

    Definition of > operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self > other.
    Overrides: list.__gt__

    __le__(self, other)
    (Less-than-or-equals operator)

    source code 

    Definition of ≤ operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self <= other.
    Overrides: list.__le__

    __lt__(self, other)
    (Less-than operator)

    source code 

    Definition of < operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self < other.
    Overrides: list.__lt__

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.capacity.PercentageQuantity-class.html
    Package CedarBackup2 :: Package extend :: Module capacity :: Class PercentageQuantity

    Class PercentageQuantity

    source code

    object --+
             |
            PercentageQuantity
    

    Class representing a percentage quantity.

    The percentage is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.)

    Even though the quantity is maintained as a string, the string must be a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative percentage in this context.
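    The validation rules above can be sketched as follows (parse_percentage is a hypothetical helper, not part of the class; the real class keeps the original string and converts on demand):

    ```python
    def parse_percentage(quantity):
        # The quantity is kept as a string; it must parse as a non-negative
        # floating point number in any format Python's float() accepts.
        value = float(quantity)  # raises ValueError for a non-numeric string
        if value < 0.0:
            raise ValueError("Percentage must not be negative.")
        return value
    ```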

    Instance Methods
     
    __init__(self, quantity=None)
    Constructor for the PercentageQuantity class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setQuantity(self, value)
    Property target used to set the quantity. The value must be a non-empty string if it is not None.
    source code
     
    _getQuantity(self)
    Property target used to get the quantity.
    source code
     
    _getPercentage(self)
    Property target used to get the quantity as a floating point number.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      quantity
    Percentage value, as a string
      percentage
    Percentage value, as a floating point number.

    Inherited from object: __class__

    Method Details

    __init__(self, quantity=None)
    (Constructor)

    source code 

    Constructor for the PercentageQuantity class.

    Parameters:
    • quantity - Percentage quantity, as a string (i.e. "99.9" or "12")
    Raises:
    • ValueError - If the quantity value is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setQuantity(self, value)

    source code 

    Property target used to set the quantity. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number
    • ValueError - If the value is less than zero

    _getPercentage(self)

    source code 

    Property target used to get the quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned.


    Property Details

    quantity

    Percentage value, as a string

    Get Method:
    _getQuantity(self) - Property target used to get the quantity.
    Set Method:
    _setQuantity(self, value) - Property target used to set the quantity. The value must be a non-empty string if it is not None.

    percentage

    Percentage value, as a floating point number.

    Get Method:
    _getPercentage(self) - Property target used to get the quantity as a floating point number.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.subversion-pysrc.html
    Package CedarBackup2 :: Package extend :: Module subversion

    Source Code for Module CedarBackup2.extend.subversion

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2005,2007,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python (>= 2.5) 
      29  # Project  : Official Cedar Backup Extensions 
      30  # Revision : $Id: subversion.py 1006 2010-07-07 21:03:57Z pronovic $ 
      31  # Purpose  : Provides an extension to back up Subversion repositories. 
      32  # 
      33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      34   
      35  ######################################################################## 
      36  # Module documentation 
      37  ######################################################################## 
      38   
      39  """ 
      40  Provides an extension to back up Subversion repositories. 
      41   
      42  This is a Cedar Backup extension used to back up Subversion repositories via 
       43  the Cedar Backup command line.  Each Subversion repository can be backed up using 
      44  the same collect modes allowed for filesystems in the standard Cedar Backup 
      45  collect action: weekly, daily, incremental.   
      46   
      47  This extension requires a new configuration section <subversion> and is 
      48  intended to be run either immediately before or immediately after the standard 
      49  collect action.  Aside from its own configuration, it requires the options and 
      50  collect configuration sections in the standard Cedar Backup configuration file. 
      51   
      52  There are two different kinds of Subversion repositories at this writing: BDB 
      53  (Berkeley Database) and FSFS (a "filesystem within a filesystem").  Although 
      54  the repository type can be specified in configuration, that information is just 
      55  kept around for reference.  It doesn't affect the backup.  Both kinds of 
      56  repositories are backed up in the same way, using C{svnadmin dump} in an 
      57  incremental mode. 
      58   
      59  It turns out that FSFS repositories can also be backed up just like any 
      60  other filesystem directory.  If you would rather do that, then use the normal 
      61  collect action.  This is probably simpler, although it carries its own  
      62  advantages and disadvantages (plus you will have to be careful to exclude 
      63  the working directories Subversion uses when building an update to commit). 
      64  Check the Subversion documentation for more information. 
      65    
      66  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      67  """ 
      68   
      69  ######################################################################## 
      70  # Imported modules 
      71  ######################################################################## 
      72   
      73  # System modules 
      74  import os 
      75  import logging 
      76  import pickle 
      77  from bz2 import BZ2File 
      78  from gzip import GzipFile 
      79   
      80  # Cedar Backup modules 
      81  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode 
      82  from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList 
      83  from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES 
      84  from CedarBackup2.filesystem import FilesystemList 
      85  from CedarBackup2.util import UnorderedList, RegexList 
      86  from CedarBackup2.util import isStartOfWeek, buildNormalizedPath 
      87  from CedarBackup2.util import resolveCommand, executeCommand 
      88  from CedarBackup2.util import ObjectTypeList, encodePath, changeOwnership 
      89   
      90   
      91  ######################################################################## 
      92  # Module-wide constants and variables 
      93  ######################################################################## 
      94   
      95  logger = logging.getLogger("CedarBackup2.log.extend.subversion") 
      96   
      97  SVNLOOK_COMMAND      = [ "svnlook", ] 
      98  SVNADMIN_COMMAND     = [ "svnadmin", ] 
      99   
     100  REVISION_PATH_EXTENSION = "svnlast" 
    
########################################################################
# RepositoryDir class definition
########################################################################

class RepositoryDir(object):

   """
   Class representing Subversion repository directory.

   A repository directory is a directory that contains one or more Subversion
   repositories.

   The following restrictions exist on data in this class:

      - The directory path must be absolute.
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.

   The repository type value is kept around just for reference.  It doesn't
   affect the behavior of the backup.

   Relative exclusions are allowed here.  However, there is no configured
   ignore file, because repository dir backups are not recursive.

   @sort: __init__, __repr__, __str__, __cmp__, directoryPath, collectMode, compressMode
   """

   def __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None,
                relativeExcludePaths=None, excludePatterns=None):
      """
      Constructor for the C{RepositoryDir} class.

      @param repositoryType: Type of repository, for reference
      @param directoryPath: Absolute path of the Subversion parent directory
      @param collectMode: Overridden collect mode for this directory.
      @param compressMode: Overridden compression mode for this directory.
      @param relativeExcludePaths: List of relative paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude
      """
      self._repositoryType = None
      self._directoryPath = None
      self._collectMode = None
      self._compressMode = None
      self._relativeExcludePaths = None
      self._excludePatterns = None
      self.repositoryType = repositoryType
      self.directoryPath = directoryPath
      self.collectMode = collectMode
      self.compressMode = compressMode
      self.relativeExcludePaths = relativeExcludePaths
      self.excludePatterns = excludePatterns

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "RepositoryDir(%s, %s, %s, %s, %s, %s)" % (self.repositoryType, self.directoryPath, self.collectMode,
                                                        self.compressMode, self.relativeExcludePaths, self.excludePatterns)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.repositoryType != other.repositoryType:
         if self.repositoryType < other.repositoryType:
            return -1
         else:
            return 1
      if self.directoryPath != other.directoryPath:
         if self.directoryPath < other.directoryPath:
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.compressMode != other.compressMode:
         if self.compressMode < other.compressMode:
            return -1
         else:
            return 1
      if self.relativeExcludePaths != other.relativeExcludePaths:
         if self.relativeExcludePaths < other.relativeExcludePaths:
            return -1
         else:
            return 1
      if self.excludePatterns != other.excludePatterns:
         if self.excludePatterns < other.excludePatterns:
            return -1
         else:
            return 1
      return 0

   def _setRepositoryType(self, value):
      """
      Property target used to set the repository type.
      There is no validation; this value is kept around just for reference.
      """
      self._repositoryType = value

   def _getRepositoryType(self):
      """
      Property target used to get the repository type.
      """
      return self._repositoryType

   def _setDirectoryPath(self, value):
      """
      Property target used to set the directory path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Repository path must be an absolute path.")
      self._directoryPath = encodePath(value)

   def _getDirectoryPath(self):
      """
      Property target used to get the repository path.
      """
      return self._directoryPath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setCompressMode(self, value):
      """
      Property target used to set the compress mode.
      If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COMPRESS_MODES:
            raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
      self._compressMode = value

   def _getCompressMode(self):
      """
      Property target used to get the compress mode.
      """
      return self._compressMode

   def _setRelativeExcludePaths(self, value):
      """
      Property target used to set the relative exclude paths list.
      Elements do not have to exist on disk at the time of assignment.
      """
      if value is None:
         self._relativeExcludePaths = None
      else:
         try:
            saved = self._relativeExcludePaths
            self._relativeExcludePaths = UnorderedList()
            self._relativeExcludePaths.extend(value)
         except Exception, e:
            self._relativeExcludePaths = saved
            raise e

   def _getRelativeExcludePaths(self):
      """
      Property target used to get the relative exclude paths list.
      """
      return self._relativeExcludePaths

   def _setExcludePatterns(self, value):
      """
      Property target used to set the exclude patterns list.
      """
      if value is None:
         self._excludePatterns = None
      else:
         try:
            saved = self._excludePatterns
            self._excludePatterns = RegexList()
            self._excludePatterns.extend(value)
         except Exception, e:
            self._excludePatterns = saved
            raise e

   def _getExcludePatterns(self):
      """
      Property target used to get the exclude patterns list.
      """
      return self._excludePatterns

   repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.")
   directoryPath = property(_getDirectoryPath, _setDirectoryPath, None, doc="Absolute path of the Subversion parent directory.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.")
   compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.")
   relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.")
   excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")
########################################################################
# Repository class definition
########################################################################

class Repository(object):

   """
   Class representing generic Subversion repository configuration.

   The following restrictions exist on data in this class:

      - The repository path must be absolute.
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.

   The repository type value is kept around just for reference.  It doesn't
   affect the behavior of the backup.

   @sort: __init__, __repr__, __str__, __cmp__, repositoryPath, collectMode, compressMode
   """

   def __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None):
      """
      Constructor for the C{Repository} class.

      @param repositoryType: Type of repository, for reference
      @param repositoryPath: Absolute path to a Subversion repository on disk.
      @param collectMode: Overridden collect mode for this directory.
      @param compressMode: Overridden compression mode for this directory.
      """
      self._repositoryType = None
      self._repositoryPath = None
      self._collectMode = None
      self._compressMode = None
      self.repositoryType = repositoryType
      self.repositoryPath = repositoryPath
      self.collectMode = collectMode
      self.compressMode = compressMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "Repository(%s, %s, %s, %s)" % (self.repositoryType, self.repositoryPath, self.collectMode, self.compressMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.repositoryType != other.repositoryType:
         if self.repositoryType < other.repositoryType:
            return -1
         else:
            return 1
      if self.repositoryPath != other.repositoryPath:
         if self.repositoryPath < other.repositoryPath:
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.compressMode != other.compressMode:
         if self.compressMode < other.compressMode:
            return -1
         else:
            return 1
      return 0

   def _setRepositoryType(self, value):
      """
      Property target used to set the repository type.
      There is no validation; this value is kept around just for reference.
      """
      self._repositoryType = value

   def _getRepositoryType(self):
      """
      Property target used to get the repository type.
      """
      return self._repositoryType

   def _setRepositoryPath(self, value):
      """
      Property target used to set the repository path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Repository path must be an absolute path.")
      self._repositoryPath = encodePath(value)

   def _getRepositoryPath(self):
      """
      Property target used to get the repository path.
      """
      return self._repositoryPath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setCompressMode(self, value):
      """
      Property target used to set the compress mode.
      If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COMPRESS_MODES:
            raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
      self._compressMode = value

   def _getCompressMode(self):
      """
      Property target used to get the compress mode.
      """
      return self._compressMode

   repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.")
   repositoryPath = property(_getRepositoryPath, _setRepositoryPath, None, doc="Path to the repository to collect.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.")
   compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.")
    477 478 ######################################################################## 479 # SubversionConfig class definition 480 ######################################################################## 481 482 -class SubversionConfig(object):
483 484 """ 485 Class representing Subversion configuration. 486 487 Subversion configuration is used for backing up Subversion repositories. 488 489 The following restrictions exist on data in this class: 490 491 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 492 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 493 - The repositories list must be a list of C{Repository} objects. 494 - The repositoryDirs list must be a list of C{RepositoryDir} objects. 495 496 For the two lists, validation is accomplished through the 497 L{util.ObjectTypeList} list implementation that overrides common list 498 methods and transparently ensures that each element has the correct type. 499 500 @note: Lists within this class are "unordered" for equality comparisons. 501 502 @sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, repositories, repositoryDirs 503 """ 504
    505 - def __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None):
    506 """ 507 Constructor for the C{SubversionConfig} class. 508 509 @param collectMode: Default collect mode. 510 @param compressMode: Default compress mode. 511 @param repositories: List of Subversion repositories to back up. 512 @param repositoryDirs: List of Subversion parent directories to back up. 513 514 @raise ValueError: If one of the values is invalid. 515 """ 516 self._collectMode = None 517 self._compressMode = None 518 self._repositories = None 519 self._repositoryDirs = None 520 self.collectMode = collectMode 521 self.compressMode = compressMode 522 self.repositories = repositories 523 self.repositoryDirs = repositoryDirs
    524
    525 - def __repr__(self):
    526 """ 527 Official string representation for class instance. 528 """ 529 return "SubversionConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.repositories, self.repositoryDirs)
    530
    531 - def __str__(self):
    532 """ 533 Informal string representation for class instance. 534 """ 535 return self.__repr__()
    536
    537 - def __cmp__(self, other):
    538 """ 539 Definition of equals operator for this class. 540 Lists within this class are "unordered" for equality comparisons. 541 @param other: Other object to compare to. 542 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 543 """ 544 if other is None: 545 return 1 546 if self.collectMode != other.collectMode: 547 if self.collectMode < other.collectMode: 548 return -1 549 else: 550 return 1 551 if self.compressMode != other.compressMode: 552 if self.compressMode < other.compressMode: 553 return -1 554 else: 555 return 1 556 if self.repositories != other.repositories: 557 if self.repositories < other.repositories: 558 return -1 559 else: 560 return 1 561 if self.repositoryDirs != other.repositoryDirs: 562 if self.repositoryDirs < other.repositoryDirs: 563 return -1 564 else: 565 return 1 566 return 0
    567
    568 - def _setCollectMode(self, value):
    569 """ 570 Property target used to set the collect mode. 571 If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 572 @raise ValueError: If the value is not valid. 573 """ 574 if value is not None: 575 if value not in VALID_COLLECT_MODES: 576 raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 577 self._collectMode = value
    578
    579 - def _getCollectMode(self):
    580 """ 581 Property target used to get the collect mode. 582 """ 583 return self._collectMode
    584
    585 - def _setCompressMode(self, value):
    586 """ 587 Property target used to set the compress mode. 588 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 589 @raise ValueError: If the value is not valid. 590 """ 591 if value is not None: 592 if value not in VALID_COMPRESS_MODES: 593 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 594 self._compressMode = value
    595
    596 - def _getCompressMode(self):
    597 """ 598 Property target used to get the compress mode. 599 """ 600 return self._compressMode
    601
    602 - def _setRepositories(self, value):
    603 """ 604 Property target used to set the repositories list. 605 Either the value must be C{None} or each element must be a C{Repository}. 606 @raise ValueError: If the value is not a C{Repository} 607 """ 608 if value is None: 609 self._repositories = None 610 else: 611 try: 612 saved = self._repositories 613 self._repositories = ObjectTypeList(Repository, "Repository") 614 self._repositories.extend(value) 615 except Exception, e: 616 self._repositories = saved 617 raise e
    618
    619 - def _getRepositories(self):
    620 """ 621 Property target used to get the repositories list. 622 """ 623 return self._repositories
    624
    625 - def _setRepositoryDirs(self, value):
626 """ 627 Property target used to set the repositoryDirs list. 628 Either the value must be C{None} or each element must be a C{RepositoryDir}. 629 @raise ValueError: If the value is not a C{RepositoryDir} 630 """ 631 if value is None: 632 self._repositoryDirs = None 633 else: 634 try: 635 saved = self._repositoryDirs 636 self._repositoryDirs = ObjectTypeList(RepositoryDir, "RepositoryDir") 637 self._repositoryDirs.extend(value) 638 except Exception, e: 639 self._repositoryDirs = saved 640 raise e
    641
    642 - def _getRepositoryDirs(self):
    643 """ 644 Property target used to get the repositoryDirs list. 645 """ 646 return self._repositoryDirs
    647 648 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") 649 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") 650 repositories = property(_getRepositories, _setRepositories, None, doc="List of Subversion repositories to back up.") 651 repositoryDirs = property(_getRepositoryDirs, _setRepositoryDirs, None, doc="List of Subversion parent directories to back up.")
    652
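Both list properties rely on `util.ObjectTypeList` to reject elements of the wrong type on assignment. A hedged approximation of that behavior (the real class lives in CedarBackup2.util; this sketch only mirrors the idea):

```python
class TypedList(list):
    # List subclass that type-checks each element on append/extend,
    # approximating what util.ObjectTypeList does for these properties.
    def __init__(self, objType, objName):
        list.__init__(self)
        self._objType = objType
        self._objName = objName

    def append(self, item):
        if not isinstance(item, self._objType):
            raise ValueError("Item must be a %s." % self._objName)
        list.append(self, item)

    def extend(self, items):
        for item in items:
            self.append(item)
```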
    653 654 ######################################################################## 655 # LocalConfig class definition 656 ######################################################################## 657 658 -class LocalConfig(object):
    659 660 """ 661 Class representing this extension's configuration document. 662 663 This is not a general-purpose configuration object like the main Cedar 664 Backup configuration object. Instead, it just knows how to parse and emit 665 Subversion-specific configuration values. Third parties who need to read 666 and write configuration related to this extension should access it through 667 the constructor, C{validate} and C{addConfig} methods. 668 669 @note: Lists within this class are "unordered" for equality comparisons. 670 671 @sort: __init__, __repr__, __str__, __cmp__, subversion, validate, addConfig 672 """ 673
    674 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    675 """ 676 Initializes a configuration object. 677 678 If you initialize the object without passing either C{xmlData} or 679 C{xmlPath} then configuration will be empty and will be invalid until it 680 is filled in properly. 681 682 No reference to the original XML data or original path is saved off by 683 this class. Once the data has been parsed (successfully or not) this 684 original information is discarded. 685 686 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 687 method will be called (with its default arguments) against configuration 688 after successfully parsing any passed-in XML. Keep in mind that even if 689 C{validate} is C{False}, it might not be possible to parse the passed-in 690 XML document if lower-level validations fail. 691 692 @note: It is strongly suggested that the C{validate} option always be set 693 to C{True} (the default) unless there is a specific need to read in 694 invalid configuration from disk. 695 696 @param xmlData: XML data representing configuration. 697 @type xmlData: String data. 698 699 @param xmlPath: Path to an XML file on disk. 700 @type xmlPath: Absolute path to a file on disk. 701 702 @param validate: Validate the document after parsing it. 703 @type validate: Boolean true/false. 704 705 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 706 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 707 @raise ValueError: If the parsed configuration document is not valid. 708 """ 709 self._subversion = None 710 self.subversion = None 711 if xmlData is not None and xmlPath is not None: 712 raise ValueError("Use either xmlData or xmlPath, but not both.") 713 if xmlData is not None: 714 self._parseXmlData(xmlData) 715 if validate: 716 self.validate() 717 elif xmlPath is not None: 718 xmlData = open(xmlPath).read() 719 self._parseXmlData(xmlData) 720 if validate: 721 self.validate()
    722
    723 - def __repr__(self):
    724 """ 725 Official string representation for class instance. 726 """ 727 return "LocalConfig(%s)" % (self.subversion)
    728
    729 - def __str__(self):
    730 """ 731 Informal string representation for class instance. 732 """ 733 return self.__repr__()
    734
    735 - def __cmp__(self, other):
    736 """ 737 Definition of equals operator for this class. 738 Lists within this class are "unordered" for equality comparisons. 739 @param other: Other object to compare to. 740 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 741 """ 742 if other is None: 743 return 1 744 if self.subversion != other.subversion: 745 if self.subversion < other.subversion: 746 return -1 747 else: 748 return 1 749 return 0
    750
    751 - def _setSubversion(self, value):
    752 """ 753 Property target used to set the subversion configuration value. 754 If not C{None}, the value must be a C{SubversionConfig} object. 755 @raise ValueError: If the value is not a C{SubversionConfig} 756 """ 757 if value is None: 758 self._subversion = None 759 else: 760 if not isinstance(value, SubversionConfig): 761 raise ValueError("Value must be a C{SubversionConfig} object.") 762 self._subversion = value
    763
    764 - def _getSubversion(self):
    765 """ 766 Property target used to get the subversion configuration value. 767 """ 768 return self._subversion
    769 770 subversion = property(_getSubversion, _setSubversion, None, "Subversion configuration in terms of a C{SubversionConfig} object.") 771
    772 - def validate(self):
773 """ 774 Validates configuration represented by the object. 775 776 Subversion configuration must be filled in. Within that, the collect 777 mode and compress mode are both optional, but the list of repositories 778 must contain at least one entry. 779 780 Each repository must contain a repository path, and then must be either 781 able to take collect mode and compress mode configuration from the parent 782 C{SubversionConfig} object, or must set each value on its own. 783 784 @raise ValueError: If one of the validations fails. 785 """ 786 if self.subversion is None: 787 raise ValueError("Subversion section is required.") 788 if ((self.subversion.repositories is None or len(self.subversion.repositories) < 1) and 789 (self.subversion.repositoryDirs is None or len(self.subversion.repositoryDirs) < 1)): 790 raise ValueError("At least one Subversion repository must be configured.") 791 if self.subversion.repositories is not None: 792 for repository in self.subversion.repositories: 793 if repository.repositoryPath is None: 794 raise ValueError("Each repository must set a repository path.") 795 if self.subversion.collectMode is None and repository.collectMode is None: 796 raise ValueError("Collect mode must either be set in parent section or individual repository.") 797 if self.subversion.compressMode is None and repository.compressMode is None: 798 raise ValueError("Compress mode must either be set in parent section or individual repository.") 799 if self.subversion.repositoryDirs is not None: 800 for repositoryDir in self.subversion.repositoryDirs: 801 if repositoryDir.directoryPath is None: 802 raise ValueError("Each repository directory must set a directory path.") 803 if self.subversion.collectMode is None and repositoryDir.collectMode is None: 804 raise ValueError("Collect mode must either be set in parent section or repository directory.") 805 if self.subversion.compressMode is None and repositoryDir.compressMode is None: 806 raise ValueError("Compress mode must either be set in parent section or repository directory.")
    807
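The per-repository checks in `validate` encode a simple fallback rule: a repository's own mode wins, otherwise the parent section must supply one. A standalone sketch of that rule (hypothetical helper, not the module's API):

```python
def effective_mode(parent_mode, repo_mode):
    # Per-repository setting takes precedence; otherwise fall back to
    # the parent section; fail if neither level sets the mode, which is
    # exactly the condition validate() rejects.
    if repo_mode is not None:
        return repo_mode
    if parent_mode is None:
        raise ValueError("Mode must be set in parent section or individual repository.")
    return parent_mode
```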
    808 - def addConfig(self, xmlDom, parentNode):
    809 """ 810 Adds a <subversion> configuration section as the next child of a parent. 811 812 Third parties should use this function to write configuration related to 813 this extension. 814 815 We add the following fields to the document:: 816 817 collectMode //cb_config/subversion/collectMode 818 compressMode //cb_config/subversion/compressMode 819 820 We also add groups of the following items, one list element per 821 item:: 822 823 repository //cb_config/subversion/repository 824 repository_dir //cb_config/subversion/repository_dir 825 826 @param xmlDom: DOM tree as from C{impl.createDocument()}. 827 @param parentNode: Parent that the section should be appended to. 828 """ 829 if self.subversion is not None: 830 sectionNode = addContainerNode(xmlDom, parentNode, "subversion") 831 addStringNode(xmlDom, sectionNode, "collect_mode", self.subversion.collectMode) 832 addStringNode(xmlDom, sectionNode, "compress_mode", self.subversion.compressMode) 833 if self.subversion.repositories is not None: 834 for repository in self.subversion.repositories: 835 LocalConfig._addRepository(xmlDom, sectionNode, repository) 836 if self.subversion.repositoryDirs is not None: 837 for repositoryDir in self.subversion.repositoryDirs: 838 LocalConfig._addRepositoryDir(xmlDom, sectionNode, repositoryDir)
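Putting the element paths from `addConfig` together, the emitted section has roughly this shape (element values here are illustrative only, not taken from real configuration):

```xml
<subversion>
   <collect_mode>incr</collect_mode>
   <compress_mode>gzip</compress_mode>
   <repository>
      <type>BDB</type>
      <abs_path>/opt/svn/repo1</abs_path>
   </repository>
   <repository_dir>
      <abs_path>/opt/svn/repos</abs_path>
      <exclude>
         <rel_path>scratch</rel_path>
         <pattern>.*[.]tmp</pattern>
      </exclude>
   </repository_dir>
</subversion>
```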
    839
    840 - def _parseXmlData(self, xmlData):
    841 """ 842 Internal method to parse an XML string into the object. 843 844 This method parses the XML document into a DOM tree (C{xmlDom}) and then 845 calls a static method to parse the subversion configuration section. 846 847 @param xmlData: XML data to be parsed 848 @type xmlData: String data 849 850 @raise ValueError: If the XML cannot be successfully parsed. 851 """ 852 (xmlDom, parentNode) = createInputDom(xmlData) 853 self._subversion = LocalConfig._parseSubversion(parentNode)
    854 855 @staticmethod
    856 - def _parseSubversion(parent):
    857 """ 858 Parses a subversion configuration section. 859 860 We read the following individual fields:: 861 862 collectMode //cb_config/subversion/collect_mode 863 compressMode //cb_config/subversion/compress_mode 864 865 We also read groups of the following item, one list element per 866 item:: 867 868 repositories //cb_config/subversion/repository 869 repository_dirs //cb_config/subversion/repository_dir 870 871 The repositories are parsed by L{_parseRepositories}, and the repository 872 dirs are parsed by L{_parseRepositoryDirs}. 873 874 @param parent: Parent node to search beneath. 875 876 @return: C{SubversionConfig} object or C{None} if the section does not exist. 877 @raise ValueError: If some filled-in value is invalid. 878 """ 879 subversion = None 880 section = readFirstChild(parent, "subversion") 881 if section is not None: 882 subversion = SubversionConfig() 883 subversion.collectMode = readString(section, "collect_mode") 884 subversion.compressMode = readString(section, "compress_mode") 885 subversion.repositories = LocalConfig._parseRepositories(section) 886 subversion.repositoryDirs = LocalConfig._parseRepositoryDirs(section) 887 return subversion
    888 889 @staticmethod
    890 - def _parseRepositories(parent):
891 """ 892 Reads a list of C{Repository} objects from immediately beneath the parent. 893 894 We read the following individual fields:: 895 896 repositoryType type 897 repositoryPath abs_path 898 collectMode collect_mode 899 compressMode compress_mode 900 901 The type field is optional, and its value is kept around only for 902 reference. 903 904 @param parent: Parent node to search beneath. 905 906 @return: List of C{Repository} objects or C{None} if none are found. 907 @raise ValueError: If some filled-in value is invalid. 908 """ 909 lst = [] 910 for entry in readChildren(parent, "repository"): 911 if isElement(entry): 912 repository = Repository() 913 repository.repositoryType = readString(entry, "type") 914 repository.repositoryPath = readString(entry, "abs_path") 915 repository.collectMode = readString(entry, "collect_mode") 916 repository.compressMode = readString(entry, "compress_mode") 917 lst.append(repository) 918 if lst == []: 919 lst = None 920 return lst
    921 922 @staticmethod
    923 - def _addRepository(xmlDom, parentNode, repository):
    924 """ 925 Adds a repository container as the next child of a parent. 926 927 We add the following fields to the document:: 928 929 repositoryType repository/type 930 repositoryPath repository/abs_path 931 collectMode repository/collect_mode 932 compressMode repository/compress_mode 933 934 The <repository> node itself is created as the next child of the parent 935 node. This method only adds one repository node. The parent must loop 936 for each repository in the C{SubversionConfig} object. 937 938 If C{repository} is C{None}, this method call will be a no-op. 939 940 @param xmlDom: DOM tree as from C{impl.createDocument()}. 941 @param parentNode: Parent that the section should be appended to. 942 @param repository: Repository to be added to the document. 943 """ 944 if repository is not None: 945 sectionNode = addContainerNode(xmlDom, parentNode, "repository") 946 addStringNode(xmlDom, sectionNode, "type", repository.repositoryType) 947 addStringNode(xmlDom, sectionNode, "abs_path", repository.repositoryPath) 948 addStringNode(xmlDom, sectionNode, "collect_mode", repository.collectMode) 949 addStringNode(xmlDom, sectionNode, "compress_mode", repository.compressMode)
    950 951 @staticmethod
    952 - def _parseRepositoryDirs(parent):
953 """ 954 Reads a list of C{RepositoryDir} objects from immediately beneath the parent. 955 956 We read the following individual fields:: 957 958 repositoryType type 959 directoryPath abs_path 960 collectMode collect_mode 961 compressMode compress_mode 962 963 We also read groups of the following items, one list element per 964 item:: 965 966 relativeExcludePaths exclude/rel_path 967 excludePatterns exclude/pattern 968 969 The exclusions are parsed by L{_parseExclusions}. 970 971 The type field is optional, and its value is kept around only for 972 reference. 973 974 @param parent: Parent node to search beneath. 975 976 @return: List of C{RepositoryDir} objects or C{None} if none are found. 977 @raise ValueError: If some filled-in value is invalid. 978 """ 979 lst = [] 980 for entry in readChildren(parent, "repository_dir"): 981 if isElement(entry): 982 repositoryDir = RepositoryDir() 983 repositoryDir.repositoryType = readString(entry, "type") 984 repositoryDir.directoryPath = readString(entry, "abs_path") 985 repositoryDir.collectMode = readString(entry, "collect_mode") 986 repositoryDir.compressMode = readString(entry, "compress_mode") 987 (repositoryDir.relativeExcludePaths, repositoryDir.excludePatterns) = LocalConfig._parseExclusions(entry) 988 lst.append(repositoryDir) 989 if lst == []: 990 lst = None 991 return lst
    992 993 @staticmethod
    994 - def _parseExclusions(parentNode):
    995 """ 996 Reads exclusions data from immediately beneath the parent. 997 998 We read groups of the following items, one list element per item:: 999 1000 relative exclude/rel_path 1001 patterns exclude/pattern 1002 1003 If there are none of some pattern (i.e. no relative path items) then 1004 C{None} will be returned for that item in the tuple. 1005 1006 @param parentNode: Parent node to search beneath. 1007 1008 @return: Tuple of (relative, patterns) exclusions. 1009 """ 1010 section = readFirstChild(parentNode, "exclude") 1011 if section is None: 1012 return (None, None) 1013 else: 1014 relative = readStringList(section, "rel_path") 1015 patterns = readStringList(section, "pattern") 1016 return (relative, patterns)
    1017 1018 @staticmethod
    1019 - def _addRepositoryDir(xmlDom, parentNode, repositoryDir):
1020 """ 1021 Adds a repository dir container as the next child of a parent. 1022 1023 We add the following fields to the document:: 1024 1025 repositoryType repository_dir/type 1026 directoryPath repository_dir/abs_path 1027 collectMode repository_dir/collect_mode 1028 compressMode repository_dir/compress_mode 1029 1030 We also add groups of the following items, one list element per item:: 1031 1032 relativeExcludePaths dir/exclude/rel_path 1033 excludePatterns dir/exclude/pattern 1034 1035 The <repository_dir> node itself is created as the next child of the 1036 parent node. This method only adds one repository node. The parent must 1037 loop for each repository dir in the C{SubversionConfig} object. 1038 1039 If C{repositoryDir} is C{None}, this method call will be a no-op. 1040 1041 @param xmlDom: DOM tree as from C{impl.createDocument()}. 1042 @param parentNode: Parent that the section should be appended to. 1043 @param repositoryDir: Repository dir to be added to the document. 1044 """ 1045 if repositoryDir is not None: 1046 sectionNode = addContainerNode(xmlDom, parentNode, "repository_dir") 1047 addStringNode(xmlDom, sectionNode, "type", repositoryDir.repositoryType) 1048 addStringNode(xmlDom, sectionNode, "abs_path", repositoryDir.directoryPath) 1049 addStringNode(xmlDom, sectionNode, "collect_mode", repositoryDir.collectMode) 1050 addStringNode(xmlDom, sectionNode, "compress_mode", repositoryDir.compressMode) 1051 if ((repositoryDir.relativeExcludePaths is not None and repositoryDir.relativeExcludePaths != []) or 1052 (repositoryDir.excludePatterns is not None and repositoryDir.excludePatterns != [])): 1053 excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") 1054 if repositoryDir.relativeExcludePaths is not None: 1055 for relativePath in repositoryDir.relativeExcludePaths: 1056 addStringNode(xmlDom, excludeNode, "rel_path", relativePath) 1057 if repositoryDir.excludePatterns is not None: 1058 for pattern in repositoryDir.excludePatterns: 1059 addStringNode(xmlDom, excludeNode, "pattern", pattern)
    1060
    1061 1062 ######################################################################## 1063 # Public functions 1064 ######################################################################## 1065 1066 ########################### 1067 # executeAction() function 1068 ########################### 1069 1070 -def executeAction(configPath, options, config):
    1071 """ 1072 Executes the Subversion backup action. 1073 1074 @param configPath: Path to configuration file on disk. 1075 @type configPath: String representing a path on disk. 1076 1077 @param options: Program command-line options. 1078 @type options: Options object. 1079 1080 @param config: Program configuration. 1081 @type config: Config object. 1082 1083 @raise ValueError: Under many generic error conditions 1084 @raise IOError: If a backup could not be written for some reason. 1085 """ 1086 logger.debug("Executing Subversion extended action.") 1087 if config.options is None or config.collect is None: 1088 raise ValueError("Cedar Backup configuration is not properly filled in.") 1089 local = LocalConfig(xmlPath=configPath) 1090 todayIsStart = isStartOfWeek(config.options.startingDay) 1091 fullBackup = options.full or todayIsStart 1092 logger.debug("Full backup flag is [%s]" % fullBackup) 1093 if local.subversion.repositories is not None: 1094 for repository in local.subversion.repositories: 1095 _backupRepository(config, local, todayIsStart, fullBackup, repository) 1096 if local.subversion.repositoryDirs is not None: 1097 for repositoryDir in local.subversion.repositoryDirs: 1098 logger.debug("Working with repository directory [%s]." % repositoryDir.directoryPath) 1099 for repositoryPath in _getRepositoryPaths(repositoryDir): 1100 repository = Repository(repositoryDir.repositoryType, repositoryPath, 1101 repositoryDir.collectMode, repositoryDir.compressMode) 1102 _backupRepository(config, local, todayIsStart, fullBackup, repository) 1103 logger.info("Completed backing up Subversion repository directory [%s]." % repositoryDir.directoryPath) 1104 logger.info("Executed the Subversion extended action successfully.")
    1105
    1106 -def _getCollectMode(local, repository):
    1107 """ 1108 Gets the collect mode that should be used for a repository. 1109 Use repository's if possible, otherwise take from subversion section. 1110 @param repository: Repository object. 1111 @return: Collect mode to use. 1112 """ 1113 if repository.collectMode is None: 1114 collectMode = local.subversion.collectMode 1115 else: 1116 collectMode = repository.collectMode 1117 logger.debug("Collect mode is [%s]" % collectMode) 1118 return collectMode
    1119
    1120 -def _getCompressMode(local, repository):
    1121 """ 1122 Gets the compress mode that should be used for a repository. 1123 Use repository's if possible, otherwise take from subversion section. 1124 @param local: LocalConfig object. 1125 @param repository: Repository object. 1126 @return: Compress mode to use. 1127 """ 1128 if repository.compressMode is None: 1129 compressMode = local.subversion.compressMode 1130 else: 1131 compressMode = repository.compressMode 1132 logger.debug("Compress mode is [%s]" % compressMode) 1133 return compressMode
    1134
    1135 -def _getRevisionPath(config, repository):
    1136 """ 1137 Gets the path to the revision file associated with a repository. 1138 @param config: Config object. 1139 @param repository: Repository object. 1140 @return: Absolute path to the revision file associated with the repository. 1141 """ 1142 normalized = buildNormalizedPath(repository.repositoryPath) 1143 filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) 1144 revisionPath = os.path.join(config.options.workingDir, filename) 1145 logger.debug("Revision file path is [%s]" % revisionPath) 1146 return revisionPath
    1147
    1148 -def _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision):
    1149 """ 1150 Gets the backup file path (including correct extension) associated with a repository. 1151 @param config: Config object. 1152 @param repositoryPath: Path to the indicated repository 1153 @param compressMode: Compress mode to use for this repository. 1154 @param startRevision: Starting repository revision. 1155 @param endRevision: Ending repository revision. 1156 @return: Absolute path to the backup file associated with the repository. 1157 """ 1158 normalizedPath = buildNormalizedPath(repositoryPath) 1159 filename = "svndump-%d:%d-%s.txt" % (startRevision, endRevision, normalizedPath) 1160 if compressMode == 'gzip': 1161 filename = "%s.gz" % filename 1162 elif compressMode == 'bzip2': 1163 filename = "%s.bz2" % filename 1164 backupPath = os.path.join(config.collect.targetDir, filename) 1165 logger.debug("Backup file path is [%s]" % backupPath) 1166 return backupPath
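The naming logic in `_getBackupPath` can be exercised in isolation. A hedged sketch of just the filename construction (standalone function, not the module's API):

```python
def backup_filename(normalized, start_revision, end_revision, compress_mode):
    # svndump-<start>:<end>-<normalized>.txt, plus a suffix matching the
    # compression in effect ("gzip" or "bzip2"; anything else stays plain).
    filename = "svndump-%d:%d-%s.txt" % (start_revision, end_revision, normalized)
    if compress_mode == "gzip":
        filename = "%s.gz" % filename
    elif compress_mode == "bzip2":
        filename = "%s.bz2" % filename
    return filename
```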
    1167
    1168 -def _getRepositoryPaths(repositoryDir):
1169 """ 1170 Gets a list of child repository paths within a repository directory. 1171 @param repositoryDir: RepositoryDir object to search beneath. 1172 """ 1173 (excludePaths, excludePatterns) = _getExclusions(repositoryDir) 1174 fsList = FilesystemList() 1175 fsList.excludeFiles = True 1176 fsList.excludeLinks = True 1177 fsList.excludePaths = excludePaths 1178 fsList.excludePatterns = excludePatterns 1179 fsList.addDirContents(path=repositoryDir.directoryPath, recursive=False, addSelf=False) 1180 return fsList
    1181
    1182 -def _getExclusions(repositoryDir):
1183 """ 1184 Gets exclusions (files and patterns) associated with a repository directory. 1185 1186 The returned files value is a list of absolute paths to be excluded from the 1187 backup for a given directory. It is derived from the repository directory's 1188 relative exclude paths. 1189 1190 The returned patterns value is a list of patterns to be excluded from the 1191 backup for a given directory. It is derived from the repository directory's 1192 list of patterns. 1193 1194 @param repositoryDir: Repository directory object. 1195 1196 @return: Tuple (files, patterns) indicating what to exclude. 1197 """ 1198 paths = [] 1199 if repositoryDir.relativeExcludePaths is not None: 1200 for relativePath in repositoryDir.relativeExcludePaths: 1201 paths.append(os.path.join(repositoryDir.directoryPath, relativePath)) 1202 patterns = [] 1203 if repositoryDir.excludePatterns is not None: 1204 patterns.extend(repositoryDir.excludePatterns) 1205 logger.debug("Exclude paths: %s" % paths) 1206 logger.debug("Exclude patterns: %s" % patterns) 1207 return (paths, patterns)
    1208
    1209 -def _backupRepository(config, local, todayIsStart, fullBackup, repository):
1210 """ 1211 Backs up an individual Subversion repository. 1212 1213 This internal method wraps the public methods and adds some functionality 1214 to work better with the extended action itself. 1215 1216 @param config: Cedar Backup configuration. 1217 @param local: Local configuration 1218 @param todayIsStart: Indicates whether today is start of week 1219 @param fullBackup: Full backup flag 1220 @param repository: Repository to operate on 1221 1222 @raise ValueError: If some value is missing or invalid. 1223 @raise IOError: If there is a problem executing the Subversion dump. 1224 """ 1225 logger.debug("Working with repository [%s]" % repository.repositoryPath) 1226 logger.debug("Repository type is [%s]" % repository.repositoryType) 1227 collectMode = _getCollectMode(local, repository) 1228 compressMode = _getCompressMode(local, repository) 1229 revisionPath = _getRevisionPath(config, repository) 1230 if not (fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart)): 1231 logger.debug("Repository will not be backed up, per collect mode.") 1232 return 1233 logger.debug("Repository meets criteria to be backed up today.") 1234 if collectMode != "incr" or fullBackup: 1235 startRevision = 0 1236 endRevision = getYoungestRevision(repository.repositoryPath) 1237 logger.debug("Using full backup, revision: (%d, %d)." % (startRevision, endRevision)) 1238 else: 1239 if fullBackup: 1240 startRevision = 0 1241 endRevision = getYoungestRevision(repository.repositoryPath) 1242 else: 1243 startRevision = _loadLastRevision(revisionPath) + 1 1244 endRevision = getYoungestRevision(repository.repositoryPath) 1245 if startRevision > endRevision: 1246 logger.info("No need to back up repository [%s]; no new revisions." % repository.repositoryPath) 1247 return 1248 logger.debug("Using incremental backup, revision: (%d, %d)." % (startRevision, endRevision)) 1249 backupPath = _getBackupPath(config, repository.repositoryPath, compressMode, startRevision, endRevision) 1250 outputFile = _getOutputFile(backupPath, compressMode) 1251 try: 1252 backupRepository(repository.repositoryPath, outputFile, startRevision, endRevision) 1253 finally: 1254 outputFile.close() 1255 if not os.path.exists(backupPath): 1256 raise IOError("Dump file [%s] does not seem to exist after backup completed." % backupPath) 1257 changeOwnership(backupPath, config.options.backupUser, config.options.backupGroup) 1258 if collectMode == "incr": 1259 _writeLastRevision(config, revisionPath, endRevision) 1260 logger.info("Completed backing up Subversion repository [%s]." % repository.repositoryPath)
    1261
def _getOutputFile(backupPath, compressMode):
   """
   Opens the output file used for saving the Subversion dump.

   If the compress mode is "gzip", we'll open a C{GzipFile}, and if the
   compress mode is "bzip2", we'll open a C{BZ2File}.  Otherwise, we'll just
   return an object from the normal C{open()} method.

   @param backupPath: Path to file to open.
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2").

   @return: Output file object.
   """
   if compressMode == "gzip":
      return GzipFile(backupPath, "w")
   elif compressMode == "bzip2":
      return BZ2File(backupPath, "w")
   else:
      return open(backupPath, "w")
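The compress-mode dispatch above can be sketched in modern Python using the standard library's `gzip` and `bz2` modules. This is a Python 3 sketch with a hypothetical standalone name, `getOutputFile` (the module itself targets Python 2):

```python
import bz2
import gzip

def getOutputFile(backupPath, compressMode):
    # Pick a writable file object based on the compress mode, mirroring the
    # dispatch in _getOutputFile().  Text mode is used here so the example
    # round-trips strings; the original Python 2 code just uses mode "w".
    if compressMode == "gzip":
        return gzip.open(backupPath, "wt")
    elif compressMode == "bzip2":
        return bz2.open(backupPath, "wt")
    else:
        return open(backupPath, "w")
```

Whatever object comes back, the caller simply writes to it and closes it; that uniformity is what lets backupRepository() accept any file-like object.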
def _loadLastRevision(revisionPath):
   """
   Loads the indicated revision file from disk into an integer.

   If we can't load the revision file successfully (either because it doesn't
   exist or for some other reason), then a revision of -1 will be returned -
   but the condition will be logged.  This way, we err on the side of backing
   up too much, because anyone using this will presumably be adding 1 to the
   revision, so they don't duplicate any backups.

   @param revisionPath: Path to the revision file on disk.

   @return: Integer representing last backed-up revision, -1 on error or if none can be read.
   """
   if not os.path.isfile(revisionPath):
      startRevision = -1
      logger.debug("Revision file [%s] does not exist on disk." % revisionPath)
   else:
      try:
         startRevision = pickle.load(open(revisionPath, "r"))
         logger.debug("Loaded revision file [%s] from disk: %d." % (revisionPath, startRevision))
      except:
         startRevision = -1
         logger.error("Failed loading revision file [%s] from disk." % revisionPath)
   return startRevision
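The load-or-fall-back-to -1 behavior can be shown with a small standalone sketch. `loadLastRevision` is a hypothetical name for this example, and unlike the real function it does not log each outcome:

```python
import os
import pickle

def loadLastRevision(revisionPath):
    # Return the pickled revision, or -1 when the file is missing or
    # unreadable.  Callers add 1 to the result, so -1 means "start over
    # from revision 0" - backing up too much rather than too little.
    if not os.path.isfile(revisionPath):
        return -1
    try:
        with open(revisionPath, "rb") as f:
            return pickle.load(f)
    except Exception:
        return -1
```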
def _writeLastRevision(config, revisionPath, endRevision):
   """
   Writes the end revision to the indicated revision file on disk.

   If we can't write the revision file successfully for any reason, we'll log
   the condition but won't throw an exception.

   @param config: Config object.
   @param revisionPath: Path to the revision file on disk.
   @param endRevision: Last revision backed up on this run.
   """
   try:
      pickle.dump(endRevision, open(revisionPath, "w"))
      changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Wrote new revision file [%s] to disk: %d." % (revisionPath, endRevision))
   except:
      logger.error("Failed to write revision file [%s] to disk." % revisionPath)
##############################
# backupRepository() function
##############################

def backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
   """
   Backs up an individual Subversion repository.

   The starting and ending revision values control an incremental backup.  If
   the starting revision is not passed in, then revision zero (the start of the
   repository) is assumed.  If the ending revision is not passed in, then the
   youngest revision in the database will be used as the endpoint.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open}, but it is possible to use
   something like a C{GzipFile} to write compressed output.  The caller is
   responsible for closing the passed-in backup file.

   @note: This function should either be run as root or as the owner of the
   Subversion repository.

   @note: It is apparently I{not} a good idea to interrupt this function.
   Sometimes, this leaves the repository in a "wedged" state, which requires
   recovery using C{svnadmin recover}.

   @param repositoryPath: Path to Subversion repository to back up.
   @type repositoryPath: String path representing Subversion repository on disk.

   @param backupFile: Python file object to use for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param startRevision: Starting repository revision to back up (for incremental backups).
   @type startRevision: Integer value >= 0.

   @param endRevision: Ending repository revision to back up (for incremental backups).
   @type endRevision: Integer value >= 0.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the Subversion dump.
   """
   if startRevision is None:
      startRevision = 0
   if endRevision is None:
      endRevision = getYoungestRevision(repositoryPath)
   if int(startRevision) < 0:
      raise ValueError("Start revision must be >= 0.")
   if int(endRevision) < 0:
      raise ValueError("End revision must be >= 0.")
   if startRevision > endRevision:
      raise ValueError("Start revision must be <= end revision.")
   args = [ "dump", "--quiet", "-r%s:%s" % (startRevision, endRevision), "--incremental", repositoryPath, ]
   command = resolveCommand(SVNADMIN_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      raise IOError("Error [%d] executing Subversion dump for repository [%s]." % (result, repositoryPath))
   logger.debug("Completed dumping Subversion repository [%s]." % repositoryPath)
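The `svnadmin` argument list assembled above can be isolated into a helper to make the revision-range handling concrete. `buildDumpArgs` is a hypothetical name for this sketch, not part of the module:

```python
def buildDumpArgs(repositoryPath, startRevision=0, endRevision=0):
    # Construct the argument list for "svnadmin dump", matching the one
    # assembled inside backupRepository().  --incremental makes the dump
    # contain only the changes within the requested revision range.
    if startRevision < 0 or endRevision < 0:
        raise ValueError("Revisions must be >= 0.")
    if startRevision > endRevision:
        raise ValueError("Start revision must be <= end revision.")
    return ["dump", "--quiet", "-r%s:%s" % (startRevision, endRevision),
            "--incremental", repositoryPath]
```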
#################################
# getYoungestRevision() function
#################################

def getYoungestRevision(repositoryPath):
   """
   Gets the youngest (newest) revision in a Subversion repository using C{svnlook}.

   @note: This function should either be run as root or as the owner of the
   Subversion repository.

   @param repositoryPath: Path to Subversion repository to look in.
   @type repositoryPath: String path representing Subversion repository on disk.

   @return: Youngest revision as an integer.

   @raise ValueError: If there is a problem parsing the C{svnlook} output.
   @raise IOError: If there is a problem executing the C{svnlook} command.
   """
   args = [ 'youngest', repositoryPath, ]
   command = resolveCommand(SVNLOOK_COMMAND)
   (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
   if result != 0:
      raise IOError("Error [%d] executing 'svnlook youngest' for repository [%s]." % (result, repositoryPath))
   if len(output) != 1:
      raise ValueError("Unable to parse 'svnlook youngest' output.")
   return int(output[0])
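The parsing rule is strict: `svnlook youngest` must produce exactly one line of output, containing the revision number. A standalone sketch of that rule (`parseYoungestOutput` is illustrative only):

```python
def parseYoungestOutput(outputLines):
    # Exactly one output line is expected; int() tolerates the trailing
    # newline, so no explicit strip is needed.
    if len(outputLines) != 1:
        raise ValueError("Unable to parse 'svnlook youngest' output.")
    return int(outputLines[0])
```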
    1413 1414 ######################################################################## 1415 # Deprecated functionality 1416 ######################################################################## 1417 1418 -class BDBRepository(Repository):
    1419 1420 """ 1421 Class representing Subversion BDB (Berkeley Database) repository configuration. 1422 This object is deprecated. Use a simple L{Repository} instead. 1423 """ 1424
    1425 - def __init__(self, repositoryPath=None, collectMode=None, compressMode=None):
    1426 """ 1427 Constructor for the C{BDBRepository} class. 1428 """ 1429 super(BDBRepository, self).__init__("BDB", repositoryPath, collectMode, compressMode)
    1430
    1431 - def __repr__(self):
    1432 """ 1433 Official string representation for class instance. 1434 """ 1435 return "BDBRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode)
    1436
    1437 1438 -class FSFSRepository(Repository):
    1439 1440 """ 1441 Class representing Subversion FSFS repository configuration. 1442 This object is deprecated. Use a simple L{Repository} instead. 1443 """ 1444
    1445 - def __init__(self, repositoryPath=None, collectMode=None, compressMode=None):
    1446 """ 1447 Constructor for the C{FSFSRepository} class. 1448 """ 1449 super(FSFSRepository, self).__init__("FSFS", repositoryPath, collectMode, compressMode)
    1450
    1451 - def __repr__(self):
    1452 """ 1453 Official string representation for class instance. 1454 """ 1455 return "FSFSRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode)
    1456
    1457 1458 -def backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
    1459 """ 1460 Backs up an individual Subversion BDB repository. 1461 This function is deprecated. Use L{backupRepository} instead. 1462 """ 1463 return backupRepository(repositoryPath, backupFile, startRevision, endRevision)
    1464
    1465 1466 -def backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
    1467 """ 1468 Backs up an individual Subversion FSFS repository. 1469 This function is deprecated. Use L{backupRepository} instead. 1470 """ 1471 return backupRepository(repositoryPath, backupFile, startRevision, endRevision)
    1472

CedarBackup2-2.22.0/doc/interface/CedarBackup2.xmlutil-module.html: CedarBackup2.xmlutil
    Package CedarBackup2 :: Module xmlutil

    Module xmlutil

    source code

    Provides general XML-related functionality.

    What I'm trying to do here is abstract much of the functionality that directly accesses the DOM tree. This is not so much to "protect" the other code from the DOM, but to standardize the way it's used. It will also help extension authors write code that easily looks more like the rest of Cedar Backup.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      Serializer
    XML serializer class.
Functions
     
    createInputDom(xmlData, name='cb_config')
    Creates a DOM tree based on reading an XML string.
    source code
     
    createOutputDom(name='cb_config')
    Creates a DOM tree used for writing an XML document.
    source code
     
    serializeDom(xmlDom, indent=3)
    Serializes a DOM tree and returns the result in a string.
    source code
     
    isElement(node)
    Returns True or False depending on whether the XML node is an element node.
    source code
     
    readChildren(parent, name)
    Returns a list of nodes with a given name immediately beneath the parent.
    source code
     
    readFirstChild(parent, name)
    Returns the first child with a given name immediately beneath the parent.
    source code
     
    readStringList(parent, name)
    Returns a list of the string contents associated with nodes with a given name immediately beneath the parent.
    source code
     
    readString(parent, name)
    Returns string contents of the first child with a given name immediately beneath the parent.
    source code
     
    readInteger(parent, name)
    Returns integer contents of the first child with a given name immediately beneath the parent.
    source code
     
    readBoolean(parent, name)
    Returns boolean contents of the first child with a given name immediately beneath the parent.
    source code
     
    addContainerNode(xmlDom, parentNode, nodeName)
    Adds a container node as the next child of a parent node.
    source code
     
    addStringNode(xmlDom, parentNode, nodeName, nodeValue)
    Adds a text node as the next child of a parent, to contain a string.
    source code
     
    addIntegerNode(xmlDom, parentNode, nodeName, nodeValue)
    Adds a text node as the next child of a parent, to contain an integer.
    source code
     
    addBooleanNode(xmlDom, parentNode, nodeName, nodeValue)
    Adds a text node as the next child of a parent, to contain a boolean.
    source code
     
    readFloat(parent, name)
    Returns float contents of the first child with a given name immediately beneath the parent.
    source code
     
    _encodeText(text, encoding) source code
     
    _translateCDATAAttr(characters)
    Handles normalization and some intelligence about quoting.
    source code
     
    _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0) source code
Variables
      TRUE_BOOLEAN_VALUES = ['Y', 'y']
    List of boolean values in XML representing True.
      FALSE_BOOLEAN_VALUES = ['N', 'n']
    List of boolean values in XML representing False.
      VALID_BOOLEAN_VALUES = ['Y', 'y', 'N', 'n']
    List of valid boolean values in XML.
      logger = logging.getLogger("CedarBackup2.log.xml")
      __package__ = 'CedarBackup2'
Function Details

    createInputDom(xmlData, name='cb_config')

    source code 

    Creates a DOM tree based on reading an XML string.

    Parameters:
    • name - Assumed base name of the document (root node name).
    Returns:
    Tuple (xmlDom, parentNode) for the parsed document
    Raises:
    • ValueError - If the document can't be parsed.

    createOutputDom(name='cb_config')

    source code 

    Creates a DOM tree used for writing an XML document.

    Parameters:
    • name - Base name of the document (root node name).
    Returns:
    Tuple (xmlDom, parentNode) for the new document

    serializeDom(xmlDom, indent=3)

    source code 

    Serializes a DOM tree and returns the result in a string.

    Parameters:
    • xmlDom - XML DOM tree to serialize
    • indent - Number of spaces to indent, as an integer
    Returns:
    String form of DOM tree, pretty-printed.

    readChildren(parent, name)

    source code 

    Returns a list of nodes with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Underneath, we use the Python getElementsByTagName method, which is pretty cool, but which (surprisingly?) returns a list of all children with a given name below the parent, at any level. We just prune that list to include only children whose parentNode matches the passed-in parent.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of nodes to search for.
    Returns:
    List of child nodes with correct parent, or an empty list if no matching nodes are found.
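The pruning described above can be sketched with `xml.dom.minidom`. `readChildrenSketch` is an illustrative stand-in, not the module's actual implementation:

```python
from xml.dom.minidom import parseString

def readChildrenSketch(parent, name):
    # getElementsByTagName() matches descendants at any depth, so keep only
    # nodes whose parentNode is the node we were asked about.
    return [n for n in parent.getElementsByTagName(name)
            if n.parentNode is parent]

dom = parseString("<cb_config><dir>a</dir><peers><dir>b</dir></peers></cb_config>")
root = dom.documentElement
deep = root.getElementsByTagName("dir")    # finds both, at any level
direct = readChildrenSketch(root, "dir")   # only the immediate child
```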

    readFirstChild(parent, name)

    source code 

    Returns the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    First properly-named child of parent, or None if no matching nodes are found.

    readStringList(parent, name)

    source code 

    Returns a list of the string contents associated with nodes with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    First, we find all of the nodes using readChildren, and then we retrieve the "string contents" of each of those nodes. The returned list has one entry per matching node. We assume that string contents of a given node belong to the first TEXT_NODE child of that node. Nodes which have no TEXT_NODE children are not represented in the returned list.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    List of strings as described above, or None if no matching nodes are found.
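The "string contents" rule (take the first TEXT_NODE child) can be sketched as follows. `firstTextChild` is illustrative; note that the real readStringList omits nodes without a text child from its list, whereas this sketch returns None for them:

```python
from xml.dom.minidom import parseString

def firstTextChild(node):
    # Return the data of the first TEXT_NODE child, or None if the node
    # has no text children at all.
    for child in node.childNodes:
        if child.nodeType == child.TEXT_NODE:
            return child.data
    return None

dom = parseString("<parent><name>backup</name><name/></parent>")
values = [firstTextChild(n)
          for n in dom.documentElement.getElementsByTagName("name")]
```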

    readString(parent, name)

    source code 

    Returns string contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. We assume that string contents of a given node belong to the first TEXT_NODE child of that node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    String contents of node or None if no matching nodes are found.

    readInteger(parent, name)

    source code 

    Returns integer contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Integer contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to an integer.

    readBoolean(parent, name)

    source code 

    Returns boolean contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    The string value of the node must be one of the values in VALID_BOOLEAN_VALUES.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Boolean contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to a boolean.

    addContainerNode(xmlDom, parentNode, nodeName)

    source code 

    Adds a container node as the next child of a parent node.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    Returns:
    Reference to the newly-created node.

    addStringNode(xmlDom, parentNode, nodeName, nodeValue)

    source code 

    Adds a text node as the next child of a parent, to contain a string.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    addIntegerNode(xmlDom, parentNode, nodeName, nodeValue)

    source code 

    Adds a text node as the next child of a parent, to contain an integer.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    The integer will be converted to a string using "%d". The result will be added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    addBooleanNode(xmlDom, parentNode, nodeName, nodeValue)

    source code 

    Adds a text node as the next child of a parent, to contain a boolean.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    Boolean True, or anything else interpreted as True by Python, will be converted to a string "Y". Anything else will be converted to a string "N". The result is added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.
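The documented conversion can be captured in a small helper. `booleanToXmlValue` is a hypothetical name for this sketch:

```python
def booleanToXmlValue(nodeValue):
    # Anything Python treats as True becomes "Y"; anything else becomes "N".
    # None is passed through unchanged so the node is created empty, as the
    # documentation above describes.
    if nodeValue is None:
        return None
    return "Y" if nodeValue else "N"
```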

    readFloat(parent, name)

    source code 

    Returns float contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Float contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to a float value.

    _encodeText(text, encoding)

    source code 

Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was attributed to Martin v. Löwis and was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.

    _translateCDATAAttr(characters)

    source code 

    Handles normalization and some intelligence about quoting.

Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.

    _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0)

    source code 

Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.


CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.dvdwriter._ImageProperties-class.html: CedarBackup2.writers.dvdwriter._ImageProperties
    Package CedarBackup2 :: Package writers :: Module dvdwriter :: Class _ImageProperties

    Class _ImageProperties

    source code

    object --+
             |
            _ImageProperties
    

    Simple value object to hold image properties for DvdWriter.

Instance Methods
     
    __init__(self)
    x.__init__(...) initializes x; see help(type(x)) for signature
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)

    source code 

    x.__init__(...) initializes x; see help(type(x)) for signature

    Overrides: object.__init__
    (inherited documentation)

CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.filesystem-module.html: filesystem

    Module filesystem


    Classes

    BackupFileList
    FilesystemList
    PurgeItemList
    SpanItem

    Functions

    compareContents
    compareDigestMaps
    normalizeDir

    Variables

    __package__
    logger

CedarBackup2-2.22.0/doc/interface/CedarBackup2.customize-module.html: CedarBackup2.customize
    Package CedarBackup2 :: Module customize

    Module customize

    source code

    Implements customized behavior.

    Some behaviors need to vary when packaged for certain platforms. For instance, while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible utilities called wodim and genisoimage. I want there to be one single place where Cedar Backup is patched for Debian, rather than having to maintain a variety of patches in different places.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
     
    customizeOverrides(config, platform='standard')
    Modify command overrides based on the configured platform.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.customize")
      PLATFORM = 'standard'
      DEBIAN_CDRECORD = '/usr/bin/wodim'
      DEBIAN_MKISOFS = '/usr/bin/genisoimage'
      __package__ = 'CedarBackup2'
Function Details

    customizeOverrides(config, platform='standard')

    source code 

    Modify command overrides based on the configured platform.

    On some platforms, we want to add command overrides to configuration. Each override will only be added if the configuration does not already contain an override with the same name. That way, the user still has a way to choose their own version of the command if they want.

    Parameters:
    • config - Configuration to modify
    • platform - Platform that is in use
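The only-if-missing merge can be sketched as follows, modeling the overrides as a plain dict from command name to absolute path. That dict representation is an assumption for this sketch (the real configuration uses override objects), and `customizeOverridesSketch` is an illustrative name:

```python
DEBIAN_CDRECORD = "/usr/bin/wodim"
DEBIAN_MKISOFS = "/usr/bin/genisoimage"

def customizeOverridesSketch(overrides, platform="standard"):
    # Add the Debian command overrides, but only for commands the user has
    # not already overridden - setdefault() never replaces an existing entry.
    if platform == "debian":
        overrides.setdefault("cdrecord", DEBIAN_CDRECORD)
        overrides.setdefault("mkisofs", DEBIAN_MKISOFS)
    return overrides
```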

CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.validate-pysrc.html: CedarBackup2.actions.validate
    Package CedarBackup2 :: Package actions :: Module validate

    Source Code for Module CedarBackup2.actions.validate

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: validate.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Implements the standard 'validate' action. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Implements the standard 'validate' action. 
     41  @sort: executeValidate 
     42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import os 
     52  import logging 
     53   
     54  # Cedar Backup modules 
     55  from CedarBackup2.util import getUidGid, getFunctionReference 
     56  from CedarBackup2.actions.util import createWriter 
     57   
     58   
     59  ######################################################################## 
     60  # Module-wide constants and variables 
     61  ######################################################################## 
     62   
     63  logger = logging.getLogger("CedarBackup2.log.actions.validate") 
     64   
     65   
     66  ######################################################################## 
     67  # Public functions 
     68  ######################################################################## 
     69   
     70  ############################# 
     71  # executeValidate() function 
     72  ############################# 
     73   
    
def executeValidate(configPath, options, config):
   """
   Executes the validate action.

   This action validates each of the individual sections in the config file.
   This is a "runtime" validation.  The config file itself is already valid in
   a structural sense, so what we check here is that we can actually use the
   configuration without any problems.

   There's a separate validation function for each of the configuration
   sections.  Each validation function returns a true/false indication for
   whether configuration was valid, and then logs any configuration problems it
   finds.  This way, one pass over configuration indicates most or all of the
   obvious problems, rather than finding just one problem at a time.

   Any reported problems will be logged at the ERROR level normally, or at the
   INFO level if the quiet flag is enabled.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: If some configuration value is invalid.
   """
   logger.debug("Executing the 'validate' action.")
   if options.quiet:
      logfunc = logger.info   # info so it goes to the log
   else:
      logfunc = logger.error  # error so it goes to the screen
   valid = True
   valid &= _validateReference(config, logfunc)
   valid &= _validateOptions(config, logfunc)
   valid &= _validateCollect(config, logfunc)
   valid &= _validateStage(config, logfunc)
   valid &= _validateStore(config, logfunc)
   valid &= _validatePurge(config, logfunc)
   valid &= _validateExtensions(config, logfunc)
   if valid:
      logfunc("Configuration is valid.")
   else:
      logfunc("Configuration is not valid.")


########################################################################
# Private utility functions
########################################################################

#######################
# _checkDir() function
#######################
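The `valid &= ...` accumulation used in executeValidate() is what lets a single run report every failing section rather than stopping at the first problem. A minimal sketch with stand-in validators:

```python
def validateAll(validators, config, logfunc):
    # AND together the results of every validator.  Each one logs its own
    # problems, so a single pass surfaces all of them, and the combined
    # result is False if any section failed.
    valid = True
    for validator in validators:
        valid &= validator(config, logfunc)
    return valid
```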
    130 -def _checkDir(path, writable, logfunc, prefix):
    131 """ 132 Checks that the indicated directory is OK. 133 134 The path must exist, must be a directory, must be readable and executable, 135 and must optionally be writable. 136 137 @param path: Path to check. 138 @param writable: Check that path is writable. 139 @param logfunc: Function to use for logging errors. 140 @param prefix: Prefix to use on logged errors. 141 142 @return: True if the directory is OK, False otherwise. 143 """ 144 if not os.path.exists(path): 145 logfunc("%s [%s] does not exist." % (prefix, path)) 146 return False 147 if not os.path.isdir(path): 148 logfunc("%s [%s] is not a directory." % (prefix, path)) 149 return False 150 if not os.access(path, os.R_OK): 151 logfunc("%s [%s] is not readable." % (prefix, path)) 152 return False 153 if not os.access(path, os.X_OK): 154 logfunc("%s [%s] is not executable." % (prefix, path)) 155 return False 156 if writable and not os.access(path, os.W_OK): 157 logfunc("%s [%s] is not writable." % (prefix, path)) 158 return False 159 return True


################################
# _validateReference() function
################################

def _validateReference(config, logfunc):
   """
   Execute runtime validations on reference configuration.

   We only validate that reference configuration exists at all.

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors.

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.reference is None:
      logfunc("Required reference configuration does not exist.")
      valid = False
   return valid


##############################
# _validateOptions() function
##############################

def _validateOptions(config, logfunc):
   """
   Execute runtime validations on options configuration.

   The following validations are enforced:

      - The options section must exist
      - The working directory must exist and must be writable
      - The backup user and backup group must exist

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors.

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.options is None:
      logfunc("Required options configuration does not exist.")
      valid = False
   else:
      valid &= _checkDir(config.options.workingDir, True, logfunc, "Working directory")
      try:
         getUidGid(config.options.backupUser, config.options.backupGroup)
      except ValueError:
         logfunc("Backup user:group [%s:%s] invalid." % (config.options.backupUser, config.options.backupGroup))
         valid = False
   return valid
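`getUidGid` comes from `CedarBackup2.util`; a rough equivalent (an assumption, not the project's implementation) can be sketched with the standard `pwd` and `grp` modules, mapping unknown names onto the ValueError that the validation above catches:

```python
import pwd
import grp

def get_uid_gid(user, group):
    """Look up (uid, gid) for a user and group name; ValueError if unknown."""
    try:
        uid = pwd.getpwnam(user).pw_uid
        gid = grp.getgrnam(group).gr_gid
    except KeyError:
        # Either the user or the group does not exist on this system.
        raise ValueError("Unknown user or group [%s:%s]." % (user, group))
    return (uid, gid)
```

On a typical Linux system, `get_uid_gid("root", "root")` returns `(0, 0)`, while a bogus name raises ValueError.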


##############################
# _validateCollect() function
##############################

def _validateCollect(config, logfunc):
   """
   Execute runtime validations on collect configuration.

   The following validations are enforced:

      - The target directory must exist and must be writable
      - Each of the individual collect directories must exist and must be readable

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors.

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.collect is not None:
      valid &= _checkDir(config.collect.targetDir, True, logfunc, "Collect target directory")
      if config.collect.collectDirs is not None:
         for collectDir in config.collect.collectDirs:
            valid &= _checkDir(collectDir.absolutePath, False, logfunc, "Collect directory")
   return valid


############################
# _validateStage() function
############################

def _validateStage(config, logfunc):
   """
   Execute runtime validations on stage configuration.

   The following validations are enforced:

      - The target directory must exist and must be writable
      - Each local peer's collect directory must exist and must be readable

   @note: We currently do not validate anything having to do with remote peers,
   since we don't have a straightforward way of doing it.  It would require
   adding an rsh command rather than just an rcp command to configuration, and
   that just doesn't seem worth it right now.

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors.

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.stage is not None:
      valid &= _checkDir(config.stage.targetDir, True, logfunc, "Stage target dir ")
      if config.stage.localPeers is not None:
         for peer in config.stage.localPeers:
            valid &= _checkDir(peer.collectDir, False, logfunc, "Local peer collect dir ")
   return valid


############################
# _validateStore() function
############################

def _validateStore(config, logfunc):
   """
   Execute runtime validations on store configuration.

   The following validations are enforced:

      - The source directory must exist and must be readable
      - The backup device (path and SCSI device) must be valid

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors.

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.store is not None:
      valid &= _checkDir(config.store.sourceDir, False, logfunc, "Store source directory")
      try:
         createWriter(config)
      except ValueError:
         logfunc("Backup device [%s] [%s] is not valid." % (config.store.devicePath, config.store.deviceScsiId))
         valid = False
   return valid


############################
# _validatePurge() function
############################

def _validatePurge(config, logfunc):
   """
   Execute runtime validations on purge configuration.

   The following validations are enforced:

      - Each purge directory must exist and must be writable

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors.

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.purge is not None:
      if config.purge.purgeDirs is not None:
         for purgeDir in config.purge.purgeDirs:
            valid &= _checkDir(purgeDir.absolutePath, True, logfunc, "Purge directory")
   return valid


#################################
# _validateExtensions() function
#################################

def _validateExtensions(config, logfunc):
   """
   Execute runtime validations on extensions configuration.

   The following validations are enforced:

      - Each indicated extension function must exist.

   @param config: Program configuration.
   @param logfunc: Function to use for logging errors.

   @return: True if configuration is valid, False otherwise.
   """
   valid = True
   if config.extensions is not None:
      if config.extensions.actions is not None:
         for action in config.extensions.actions:
            try:
               getFunctionReference(action.module, action.function)
            except ImportError:
               logfunc("Unable to find function [%s.%s]." % (action.module, action.function))
               valid = False
            except ValueError:
               logfunc("Function [%s.%s] is not callable." % (action.module, action.function))
               valid = False
   return valid
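`getFunctionReference` also lives in `CedarBackup2.util`; a rough equivalent (an assumption, not the project's implementation) can be sketched with `importlib`: import the named module, fetch the attribute, and confirm it is callable, raising the same exception types the loop above handles.

```python
import importlib

def get_function_reference(module, function):
    """Resolve module.function to a callable, or raise ImportError/ValueError."""
    mod = importlib.import_module(module)   # raises ImportError if missing
    ref = getattr(mod, function, None)
    if ref is None or not callable(ref):
        raise ValueError("Function [%s.%s] is not callable." % (module, function))
    return ref
```

For example, `get_function_reference("os.path", "join")` returns the `os.path.join` function, while asking for a non-callable attribute such as `os.path.sep` raises ValueError.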

CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.RegexMatchList-class.html

CedarBackup2.util.RegexMatchList
    Package CedarBackup2 :: Module util :: Class RegexMatchList

    Class RegexMatchList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RegexMatchList
    

    Class representing a list containing only strings that match a regular expression.

    If emptyAllowed is passed in as False, then empty strings are explicitly disallowed, even if they happen to match the regular expression. (None values are always disallowed, since string operations are not permitted on None.)

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list matches the indicated regular expression.


    Note: If you try to put values that are not strings into the list, you will likely get either TypeError or AttributeError exceptions as a result.

Instance Methods
    new empty list
    __init__(self, valuesRegex, emptyAllowed=True, prefix=None)
    Initializes a list restricted to containing certain values.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Class Variables

    Inherited from list: __hash__

Properties

    Inherited from object: __class__

Method Details

    __init__(self, valuesRegex, emptyAllowed=True, prefix=None)
    (Constructor)

    source code 

    Initializes a list restricted to containing certain values.

    Parameters:
    • valuesRegex - Regular expression that must be matched, as a string
    • emptyAllowed - Indicates whether empty or None values are allowed.
    • prefix - Prefix to use in error messages (None results in prefix "Item")
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is None
    • ValueError - If item is empty and empty values are not allowed
    • ValueError - If item does not match the configured regular expression
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is None
    • ValueError - If item is empty and empty values are not allowed
    • ValueError - If item does not match the configured regular expression
    Overrides: list.insert

    extend(self, seq)

    source code 

Overrides the standard extend method.

    Raises:
    • ValueError - If any item is None
    • ValueError - If any item is empty and empty values are not allowed
    • ValueError - If any item does not match the configured regular expression
    Overrides: list.extend

CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.Config-class.html

CedarBackup2.config.Config
    Package CedarBackup2 :: Module config :: Class Config

    Class Config

    source code

    object --+
             |
            Config
    

    Class representing a Cedar Backup XML configuration document.

    The Config class is a Python object representation of a Cedar Backup XML configuration file. It is intended to be the only Python-language interface to Cedar Backup configuration on disk for both Cedar Backup itself and for external applications.

The object representation is two-way: XML data can be used to create a Config object, and then changes to the object can be propagated back to disk. A Config object can even be used to create a configuration file from scratch programmatically.

    This class and the classes it is composed from often use Python's property construct to validate input and limit access to values. Some validations can only be done once a document is considered "complete" (see module notes for more details).

    Assignments to the various instance variables must match the expected type, i.e. reference must be a ReferenceConfig. The internal check uses the built-in isinstance function, so it should be OK to use subclasses if you want to.

    If an instance variable is not set, its value will be None. When an object is initialized without using an XML document, all of the values will be None. Even when an object is initialized using XML, some of the values might be None because not every section is required.


    Note: Lists within this class are "unordered" for equality comparisons.
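An illustrative sketch (an assumption, simplified from the pattern the class notes describe, not the library's actual code) of a property setter that type-checks its value with isinstance, accepting None or the expected config type, so that subclasses also pass:

```python
class ReferenceConfig(object):
    """Stand-in for the real ReferenceConfig class."""
    pass

class Config(object):
    def __init__(self):
        self._reference = None

    def _getReference(self):
        return self._reference

    def _setReference(self, value):
        # isinstance allows subclasses of ReferenceConfig as well.
        if value is not None and not isinstance(value, ReferenceConfig):
            raise ValueError("Value must be a ReferenceConfig object.")
        self._reference = value

    reference = property(_getReference, _setReference, None, "Reference configuration.")
```

Assigning anything other than None or a ReferenceConfig (or subclass) to `config.reference` raises ValueError.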

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    extractXml(self, xmlPath=None, validate=True)
    Extracts configuration into an XML document.
    source code
     
    validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False)
    Validates configuration represented by the object.
    source code
     
    _getReference(self)
    Property target used to get the reference configuration value.
    source code
     
    _setReference(self, value)
    Property target used to set the reference configuration value.
    source code
     
    _getExtensions(self)
    Property target used to get the extensions configuration value.
    source code
     
    _setExtensions(self, value)
    Property target used to set the extensions configuration value.
    source code
     
    _getOptions(self)
    Property target used to get the options configuration value.
    source code
     
    _setOptions(self, value)
    Property target used to set the options configuration value.
    source code
     
    _getPeers(self)
    Property target used to get the peers configuration value.
    source code
     
    _setPeers(self, value)
    Property target used to set the peers configuration value.
    source code
     
    _getCollect(self)
    Property target used to get the collect configuration value.
    source code
     
    _setCollect(self, value)
    Property target used to set the collect configuration value.
    source code
     
    _getStage(self)
    Property target used to get the stage configuration value.
    source code
     
    _setStage(self, value)
    Property target used to set the stage configuration value.
    source code
     
    _getStore(self)
    Property target used to get the store configuration value.
    source code
     
    _setStore(self, value)
    Property target used to set the store configuration value.
    source code
     
    _getPurge(self)
    Property target used to get the purge configuration value.
    source code
     
    _setPurge(self, value)
    Property target used to set the purge configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    _extractXml(self)
    Internal method to extract configuration into an XML string.
    source code
     
    _validateContents(self)
    Validates configuration contents per rules discussed in module documentation.
    source code
     
    _validateReference(self)
    Validates reference configuration.
    source code
     
    _validateExtensions(self)
    Validates extensions configuration.
    source code
     
    _validateOptions(self)
    Validates options configuration.
    source code
     
    _validatePeers(self)
    Validates peers configuration per rules in _validatePeerList.
    source code
     
    _validateCollect(self)
    Validates collect configuration.
    source code
     
    _validateStage(self)
    Validates stage configuration.
    source code
     
    _validateStore(self)
    Validates store configuration.
    source code
     
    _validatePurge(self)
    Validates purge configuration.
    source code
     
    _validatePeerList(self, localPeers, remotePeers)
    Validates the set of local and remote peers.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseReference(parentNode)
    Parses a reference configuration section.
    source code
     
    _parseExtensions(parentNode)
    Parses an extensions configuration section.
    source code
     
    _parseOptions(parentNode)
Parses an options configuration section.
    source code
     
    _parsePeers(parentNode)
    Parses a peers configuration section.
    source code
     
    _parseCollect(parentNode)
    Parses a collect configuration section.
    source code
     
    _parseStage(parentNode)
    Parses a stage configuration section.
    source code
     
    _parseStore(parentNode)
    Parses a store configuration section.
    source code
     
    _parsePurge(parentNode)
    Parses a purge configuration section.
    source code
     
    _parseExtendedActions(parentNode)
    Reads extended actions data from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _parseOverrides(parentNode)
    Reads a list of CommandOverride objects from immediately beneath the parent.
    source code
     
    _parseHooks(parentNode)
    Reads a list of ActionHook objects from immediately beneath the parent.
    source code
     
    _parseCollectFiles(parentNode)
    Reads a list of CollectFile objects from immediately beneath the parent.
    source code
     
    _parseCollectDirs(parentNode)
    Reads a list of CollectDir objects from immediately beneath the parent.
    source code
     
    _parsePurgeDirs(parentNode)
    Reads a list of PurgeDir objects from immediately beneath the parent.
    source code
     
    _parsePeerList(parentNode)
    Reads remote and local peer data from immediately beneath the parent.
    source code
     
    _parseDependencies(parentNode)
    Reads extended action dependency information from a parent node.
    source code
     
    _parseBlankBehavior(parentNode)
    Reads a single BlankBehavior object from immediately beneath the parent.
    source code
     
    _addReference(xmlDom, parentNode, referenceConfig)
    Adds a <reference> configuration section as the next child of a parent.
    source code
     
    _addExtensions(xmlDom, parentNode, extensionsConfig)
    Adds an <extensions> configuration section as the next child of a parent.
    source code
     
    _addOptions(xmlDom, parentNode, optionsConfig)
Adds an <options> configuration section as the next child of a parent.
    source code
     
    _addPeers(xmlDom, parentNode, peersConfig)
    Adds a <peers> configuration section as the next child of a parent.
    source code
     
    _addCollect(xmlDom, parentNode, collectConfig)
    Adds a <collect> configuration section as the next child of a parent.
    source code
     
    _addStage(xmlDom, parentNode, stageConfig)
    Adds a <stage> configuration section as the next child of a parent.
    source code
     
    _addStore(xmlDom, parentNode, storeConfig)
    Adds a <store> configuration section as the next child of a parent.
    source code
     
    _addPurge(xmlDom, parentNode, purgeConfig)
    Adds a <purge> configuration section as the next child of a parent.
    source code
     
    _addExtendedAction(xmlDom, parentNode, action)
    Adds an extended action container as the next child of a parent.
    source code
     
    _addOverride(xmlDom, parentNode, override)
    Adds a command override container as the next child of a parent.
    source code
     
    _addHook(xmlDom, parentNode, hook)
    Adds an action hook container as the next child of a parent.
    source code
     
    _addCollectFile(xmlDom, parentNode, collectFile)
    Adds a collect file container as the next child of a parent.
    source code
     
    _addCollectDir(xmlDom, parentNode, collectDir)
    Adds a collect directory container as the next child of a parent.
    source code
     
    _addLocalPeer(xmlDom, parentNode, localPeer)
    Adds a local peer container as the next child of a parent.
    source code
     
    _addRemotePeer(xmlDom, parentNode, remotePeer)
    Adds a remote peer container as the next child of a parent.
    source code
     
    _addPurgeDir(xmlDom, parentNode, purgeDir)
    Adds a purge directory container as the next child of a parent.
    source code
     
    _addDependencies(xmlDom, parentNode, dependencies)
Adds extended action dependencies to the parent node.
    source code
     
    _buildCommaSeparatedString(valueList)
    Creates a comma-separated string from a list of values.
    source code
     
    _addBlankBehavior(xmlDom, parentNode, blankBehavior)
    Adds a blanking behavior container as the next child of a parent.
    source code
Properties
      reference
    Reference configuration in terms of a ReferenceConfig object.
      extensions
    Extensions configuration in terms of a ExtensionsConfig object.
      options
    Options configuration in terms of a OptionsConfig object.
      collect
    Collect configuration in terms of a CollectConfig object.
      stage
    Stage configuration in terms of a StageConfig object.
      store
    Store configuration in terms of a StoreConfig object.
      purge
    Purge configuration in terms of a PurgeConfig object.
      peers
    Peers configuration in terms of a PeersConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath, then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the Config.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    extractXml(self, xmlPath=None, validate=True)

    source code 

    Extracts configuration into an XML document.

    If xmlPath is not provided, then the XML document will be returned as a string. If xmlPath is provided, then the XML document will be written to the file and None will be returned.

    Unless the validate parameter is False, the Config.validate method will be called (with its default arguments) against the configuration before extracting the XML. If configuration is not valid, then an XML document will not be extracted.

    Parameters:
    • xmlPath (Absolute path to a file.) - Path to an XML file to create on disk.
    • validate (Boolean true/false.) - Validate the document before extracting it.
    Returns:
    XML string data or None as described above.
    Raises:
    • ValueError - If configuration within the object is not valid.
    • IOError - If there is an error writing to the file.
    • OSError - If there is an error writing to the file.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to write an invalid configuration file to disk.

    validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False)

    source code 

    Validates configuration represented by the object.

    This method encapsulates all of the validations that should apply to a fully "complete" document but are not already taken care of by earlier validations. It also provides some extra convenience functionality which might be useful to some people. The process of validation is laid out in the Validation section in the class notes (above).

    Parameters:
    • requireOneAction - Require at least one of the collect, stage, store or purge sections.
    • requireReference - Require the reference section.
    • requireExtensions - Require the extensions section.
    • requireOptions - Require the options section.
    • requirePeers - Require the peers section.
    • requireCollect - Require the collect section.
    • requireStage - Require the stage section.
    • requireStore - Require the store section.
    • requirePurge - Require the purge section.
    Raises:
    • ValueError - If one of the validations fails.

    _setReference(self, value)

    source code 

    Property target used to set the reference configuration value. If not None, the value must be a ReferenceConfig object.

    Raises:
    • ValueError - If the value is not a ReferenceConfig

    _setExtensions(self, value)

    source code 

    Property target used to set the extensions configuration value. If not None, the value must be a ExtensionsConfig object.

    Raises:
    • ValueError - If the value is not a ExtensionsConfig

    _setOptions(self, value)

    source code 

    Property target used to set the options configuration value. If not None, the value must be an OptionsConfig object.

    Raises:
    • ValueError - If the value is not a OptionsConfig

    _setPeers(self, value)

    source code 

    Property target used to set the peers configuration value. If not None, the value must be an PeersConfig object.

    Raises:
    • ValueError - If the value is not a PeersConfig

    _setCollect(self, value)

    source code 

    Property target used to set the collect configuration value. If not None, the value must be a CollectConfig object.

    Raises:
    • ValueError - If the value is not a CollectConfig

    _setStage(self, value)

    source code 

    Property target used to set the stage configuration value. If not None, the value must be a StageConfig object.

    Raises:
    • ValueError - If the value is not a StageConfig

    _setStore(self, value)

    source code 

    Property target used to set the store configuration value. If not None, the value must be a StoreConfig object.

    Raises:
    • ValueError - If the value is not a StoreConfig

    _setPurge(self, value)

    source code 

    Property target used to set the purge configuration value. If not None, the value must be a PurgeConfig object.

    Raises:
    • ValueError - If the value is not a PurgeConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls individual static methods to parse each of the individual configuration sections.

    Most of the validation we do here has to do with whether the document can be parsed and whether any values which exist are valid. We don't do much validation as to whether required elements actually exist unless we need to in order to make sense of the document (instead, that's the job of the validate method).

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseReference(parentNode)
    Static Method

    source code 

    Parses a reference configuration section.

    We read the following fields:

      author         //cb_config/reference/author
      revision       //cb_config/reference/revision
      description    //cb_config/reference/description
      generator      //cb_config/reference/generator
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ReferenceConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExtensions(parentNode)
    Static Method

    source code 

    Parses an extensions configuration section.

    We read the following fields:

      orderMode            //cb_config/extensions/order_mode
    

    We also read groups of the following items, one list element per item:

      name                 //cb_config/extensions/action/name
      module               //cb_config/extensions/action/module
      function             //cb_config/extensions/action/function
      index                //cb_config/extensions/action/index
      dependencies         //cb_config/extensions/action/depends
    

    The extended actions are parsed by _parseExtendedActions.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ExtensionsConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseOptions(parentNode)
    Static Method

    source code 

Parses an options configuration section.

    We read the following fields:

      startingDay    //cb_config/options/starting_day
      workingDir     //cb_config/options/working_dir
      backupUser     //cb_config/options/backup_user
      backupGroup    //cb_config/options/backup_group
      rcpCommand     //cb_config/options/rcp_command
      rshCommand     //cb_config/options/rsh_command
      cbackCommand   //cb_config/options/cback_command
      managedActions //cb_config/options/managed_actions
    

    The list of managed actions is a comma-separated list of action names.

    We also read groups of the following items, one list element per item:

      overrides      //cb_config/options/override
      hooks          //cb_config/options/hook
    

    The overrides are parsed by _parseOverrides and the hooks are parsed by _parseHooks.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    OptionsConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePeers(parentNode)
    Static Method

    source code 

    Parses a peers configuration section.

    We read groups of the following items, one list element per item:

      localPeers     //cb_config/stage/peer
      remotePeers    //cb_config/stage/peer
    

    The individual peer entries are parsed by _parsePeerList.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
PeersConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseCollect(parentNode)
    Static Method

    source code 

    Parses a collect configuration section.

    We read the following individual fields:

      targetDir            //cb_config/collect/collect_dir
      collectMode          //cb_config/collect/collect_mode
      archiveMode          //cb_config/collect/archive_mode
      ignoreFile           //cb_config/collect/ignore_file
    

    We also read groups of the following items, one list element per item:

      absoluteExcludePaths //cb_config/collect/exclude/abs_path
      excludePatterns      //cb_config/collect/exclude/pattern
      collectFiles         //cb_config/collect/file
      collectDirs          //cb_config/collect/dir
    

    The exclusions are parsed by _parseExclusions, the collect files are parsed by _parseCollectFiles, and the directories are parsed by _parseCollectDirs.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    CollectConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseStage(parentNode)
    Static Method

    source code 

    Parses a stage configuration section.

    We read the following individual fields:

      targetDir      //cb_config/stage/staging_dir
    

    We also read groups of the following items, one list element per item:

      localPeers     //cb_config/stage/peer
      remotePeers    //cb_config/stage/peer
    

    The individual peer entries are parsed by _parsePeerList.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    StageConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseStore(parentNode)
    Static Method

    source code 

    Parses a store configuration section.

    We read the following fields:

      sourceDir         //cb_config/store/source_dir
      mediaType         //cb_config/store/media_type
      deviceType        //cb_config/store/device_type
      devicePath        //cb_config/store/target_device
      deviceScsiId      //cb_config/store/target_scsi_id
      driveSpeed        //cb_config/store/drive_speed
      checkData         //cb_config/store/check_data
      checkMedia        //cb_config/store/check_media
      warnMidnite       //cb_config/store/warn_midnite
      noEject           //cb_config/store/no_eject
    

    Blanking behavior configuration is parsed by the _parseBlankBehavior method.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    StoreConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePurge(parentNode)
    Static Method

    source code 

    Parses a purge configuration section.

    We read groups of the following items, one list element per item:

      purgeDirs     //cb_config/purge/dir
    

    The individual directory entries are parsed by _parsePurgeDirs.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    PurgeConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExtendedActions(parentNode)
    Static Method

    source code 

    Reads extended actions data from immediately beneath the parent.

    We read the following individual fields from each extended action:

      name           name
      module         module
      function       function
      index          index
      dependencies   depends
    

    Dependency information is parsed by the _parseDependencies method.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of extended actions.
    Raises:
    • ValueError - If the data at the location can't be read

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      absolute    exclude/abs_path
      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are no items of a given type (i.e. no relative path items), then None will be returned for that item in the tuple.

    This method can be used to parse exclusions on both the collect configuration level and on the collect directory level within collect configuration.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (absolute, relative, patterns) exclusions.

    _parseOverrides(parentNode)
    Static Method

    source code 

    Reads a list of CommandOverride objects from immediately beneath the parent.

    We read the following individual fields:

      command                 command 
      absolutePath            abs_path
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CommandOverride objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseHooks(parentNode)
    Static Method

    source code 

    Reads a list of ActionHook objects from immediately beneath the parent.

    We read the following individual fields:

      action                  action  
      command                 command 
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of ActionHook objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseCollectFiles(parentNode)
    Static Method

    source code 

    Reads a list of CollectFile objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             mode or collect_mode
      archiveMode             archive_mode
    

    The collect mode is a special case. A bare mode tag is accepted for backwards compatibility, but we prefer collect_mode for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only mode will be used.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CollectFile objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseCollectDirs(parentNode)
    Static Method

    source code 

    Reads a list of CollectDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             mode or collect_mode
      archiveMode             archive_mode
      ignoreFile              ignore_file
      linkDepth               link_depth
      dereference             dereference
      recursionLevel          recursion_level
    

    The collect mode is a special case. Just a mode tag is accepted for backwards compatibility, but we prefer collect_mode for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only mode will be used.

    We also read groups of the following items, one list element per item:

      absoluteExcludePaths    exclude/abs_path
      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CollectDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.
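    The mode/collect_mode precedence described above can be sketched with xml.dom.minidom; first_text and read_collect_mode are hypothetical helpers illustrating the rule, not the actual parser internals.

    ```python
    from xml.dom.minidom import parseString

    def first_text(parent, tag):
        # Hypothetical helper: text of the first <tag> descendant, or None if absent.
        nodes = parent.getElementsByTagName(tag)
        if not nodes or nodes[0].firstChild is None:
            return None
        return nodes[0].firstChild.data.strip()

    def read_collect_mode(dir_node):
        # Per the note above: a bare <mode> tag wins when both tags are present.
        mode = first_text(dir_node, "mode")
        if mode is not None:
            return mode
        return first_text(dir_node, "collect_mode")

    doc = parseString("<dir><mode>incr</mode><collect_mode>daily</collect_mode></dir>")
    ```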

    _parsePurgeDirs(parentNode)
    Static Method

    source code 

    Reads a list of PurgeDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            <baseExpr>/abs_path
      retainDays              <baseExpr>/retain_days
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of PurgeDir objects or None if none are found.
    Raises:
    • ValueError - If the data at the location can't be read

    _parsePeerList(parentNode)
    Static Method

    source code 

    Reads remote and local peer data from immediately beneath the parent.

    We read the following individual fields for both remote and local peers:

      name        name
      collectDir  collect_dir
    

    We also read the following individual fields for remote peers only:

      remoteUser     backup_user
      rcpCommand     rcp_command
      rshCommand     rsh_command
      cbackCommand   cback_command
      managed        managed
      managedActions managed_actions
    

    Additionally, the value in the type field is used to determine whether this entry is a remote peer. If the type is "remote", it's a remote peer, and if the type is "local", it's a local peer.

    If there are none of one type of peer (i.e. no local peers) then None will be returned for that item in the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (local, remote) peer lists.
    Raises:
    • ValueError - If the data at the location can't be read
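    The type-based split and the tuple behavior above can be sketched as follows; split_peers is a hypothetical, simplified stand-in that collects only peer names rather than full peer objects.

    ```python
    from xml.dom.minidom import parseString

    def split_peers(parent):
        """Split <peer> entries into (local, remote) name lists by their <type> tag.

        Sketch of the documented tuple behavior: if there are no peers of one
        type, None is returned for that slot.
        """
        local, remote = [], []
        for peer in parent.getElementsByTagName("peer"):
            type_nodes = peer.getElementsByTagName("type")
            peer_type = type_nodes[0].firstChild.data.strip() if type_nodes else None
            name_nodes = peer.getElementsByTagName("name")
            name = name_nodes[0].firstChild.data.strip() if name_nodes else None
            if peer_type == "remote":
                remote.append(name)
            else:
                local.append(name)
        return (local or None, remote or None)
    ```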

    _parseDependencies(parentNode)
    Static Method

    source code 

    Reads extended action dependency information from a parent node.

    We read the following individual fields:

      runBefore   depends/run_before
      runAfter    depends/run_after
    

    Each of these fields is a comma-separated list of action names.

    The result is placed into an ActionDependencies object.

    If the dependencies parent node does not exist, None will be returned. Otherwise, an ActionDependencies object will always be created, even if it does not contain any actual dependencies in it.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ActionDependencies object or None.
    Raises:
    • ValueError - If the data at the location can't be read

    _parseBlankBehavior(parentNode)
    Static Method

    source code 

    Reads a single BlankBehavior object from immediately beneath the parent.

    We read the following individual fields:

      blankMode     blank_behavior/mode
      blankFactor   blank_behavior/factor
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    BlankBehavior object or None if the section is not found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _extractXml(self)

    source code 

    Internal method to extract configuration into an XML string.

    This method assumes that the internal validate method has been called prior to extracting the XML, if the caller cares. No validation will be done internally.

    As a general rule, fields that are set to None will be extracted into the document as empty tags. The same goes for container tags that are filled based on lists - if the list is empty or None, the container tag will be empty.

    _addReference(xmlDom, parentNode, referenceConfig)
    Static Method

    source code 

    Adds a <reference> configuration section as the next child of a parent.

    We add the following fields to the document:

      author         //cb_config/reference/author
      revision       //cb_config/reference/revision
      description    //cb_config/reference/description
      generator      //cb_config/reference/generator
    

    If referenceConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • referenceConfig - Reference configuration section to be added to the document.

    _addExtensions(xmlDom, parentNode, extensionsConfig)
    Static Method

    source code 

    Adds an <extensions> configuration section as the next child of a parent.

    We add the following fields to the document:

      order_mode     //cb_config/extensions/order_mode
    

    We also add groups of the following items, one list element per item:

      actions        //cb_config/extensions/action
    

    The extended action entries are added by _addExtendedAction.

    If extensionsConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • extensionsConfig - Extensions configuration section to be added to the document.

    _addOptions(xmlDom, parentNode, optionsConfig)
    Static Method

    source code 

    Adds an <options> configuration section as the next child of a parent.

    We add the following fields to the document:

      startingDay    //cb_config/options/starting_day
      workingDir     //cb_config/options/working_dir
      backupUser     //cb_config/options/backup_user
      backupGroup    //cb_config/options/backup_group
      rcpCommand     //cb_config/options/rcp_command
      rshCommand     //cb_config/options/rsh_command
      cbackCommand   //cb_config/options/cback_command
      managedActions //cb_config/options/managed_actions
    

    We also add groups of the following items, one list element per item:

      overrides      //cb_config/options/override
      hooks          //cb_config/options/pre_action_hook
      hooks          //cb_config/options/post_action_hook
    

    The individual override items are added by _addOverride. The individual hook items are added by _addHook.

    If optionsConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • optionsConfig - Options configuration section to be added to the document.

    _addPeers(xmlDom, parentNode, peersConfig)
    Static Method

    source code 

    Adds a <peers> configuration section as the next child of a parent.

    We add groups of the following items, one list element per item:

      localPeers     //cb_config/peers/peer
      remotePeers    //cb_config/peers/peer
    

    The individual local and remote peer entries are added by _addLocalPeer and _addRemotePeer, respectively.

    If peersConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • peersConfig - Peers configuration section to be added to the document.

    _addCollect(xmlDom, parentNode, collectConfig)
    Static Method

    source code 

    Adds a <collect> configuration section as the next child of a parent.

    We add the following fields to the document:

      targetDir            //cb_config/collect/collect_dir
      collectMode          //cb_config/collect/collect_mode
      archiveMode          //cb_config/collect/archive_mode
      ignoreFile           //cb_config/collect/ignore_file
    

    We also add groups of the following items, one list element per item:

      absoluteExcludePaths //cb_config/collect/exclude/abs_path
      excludePatterns      //cb_config/collect/exclude/pattern
      collectFiles         //cb_config/collect/file
      collectDirs          //cb_config/collect/dir
    

    The individual collect files are added by _addCollectFile and individual collect directories are added by _addCollectDir.

    If collectConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectConfig - Collect configuration section to be added to the document.

    _addStage(xmlDom, parentNode, stageConfig)
    Static Method

    source code 

    Adds a <stage> configuration section as the next child of a parent.

    We add the following fields to the document:

      targetDir      //cb_config/stage/staging_dir
    

    We also add groups of the following items, one list element per item:

      localPeers     //cb_config/stage/peer
      remotePeers    //cb_config/stage/peer
    

    The individual local and remote peer entries are added by _addLocalPeer and _addRemotePeer, respectively.

    If stageConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • stageConfig - Stage configuration section to be added to the document.

    _addStore(xmlDom, parentNode, storeConfig)
    Static Method

    source code 

    Adds a <store> configuration section as the next child of a parent.

    We add the following fields to the document:

      sourceDir         //cb_config/store/source_dir
      mediaType         //cb_config/store/media_type
      deviceType        //cb_config/store/device_type
      devicePath        //cb_config/store/target_device
      deviceScsiId      //cb_config/store/target_scsi_id
      driveSpeed        //cb_config/store/drive_speed
      checkData         //cb_config/store/check_data
      checkMedia        //cb_config/store/check_media
      warnMidnite       //cb_config/store/warn_midnite
      noEject           //cb_config/store/no_eject
      refreshMediaDelay //cb_config/store/refresh_media_delay
      ejectDelay        //cb_config/store/eject_delay
    

    Blanking behavior configuration is added by the _addBlankBehavior method.

    If storeConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • storeConfig - Store configuration section to be added to the document.

    _addPurge(xmlDom, parentNode, purgeConfig)
    Static Method

    source code 

    Adds a <purge> configuration section as the next child of a parent.

    We add the following fields to the document:

      purgeDirs     //cb_config/purge/dir
    

    The individual directory entries are added by _addPurgeDir.

    If purgeConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • purgeConfig - Purge configuration section to be added to the document.

    _addExtendedAction(xmlDom, parentNode, action)
    Static Method

    source code 

    Adds an extended action container as the next child of a parent.

    We add the following fields to the document:

      name           action/name
      module         action/module
      function       action/function
      index          action/index
      dependencies   action/depends
    

    Dependencies are added by the _addDependencies method.

    The <action> node itself is created as the next child of the parent node. This method only adds one action node. The parent must loop for each action in the ExtensionsConfig object.

    If action is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • action - Extended action to be added to the document.

    _addOverride(xmlDom, parentNode, override)
    Static Method

    source code 

    Adds a command override container as the next child of a parent.

    We add the following fields to the document:

      command                 override/command
      absolutePath            override/abs_path
    

    The <override> node itself is created as the next child of the parent node. This method only adds one override node. The parent must loop for each override in the OptionsConfig object.

    If override is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • override - Command override to be added to the document.

    _addHook(xmlDom, parentNode, hook)
    Static Method

    source code 

    Adds an action hook container as the next child of a parent.

    The behavior varies depending on the value of the before and after flags on the hook. If the before flag is set, it's a pre-action hook, and we'll add the following fields:

      action                  pre_action_hook/action
      command                 pre_action_hook/command
    

    If the after flag is set, it's a post-action hook, and we'll add the following fields:

      action                  post_action_hook/action
      command                 post_action_hook/command
    

    The <pre_action_hook> or <post_action_hook> node itself is created as the next child of the parent node. This method only adds one hook node. The parent must loop for each hook in the OptionsConfig object.

    If hook is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • hook - Command hook to be added to the document.
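    The pre/post split described above can be sketched as follows; add_hook is a hypothetical helper showing how the node name is chosen from the before flag, not the actual _addHook implementation.

    ```python
    from xml.dom.minidom import getDOMImplementation

    def add_hook(xml_dom, parent_node, action, command, before):
        # before=True emits <pre_action_hook>, otherwise <post_action_hook>,
        # each containing <action> and <command> children.
        tag = "pre_action_hook" if before else "post_action_hook"
        hook_node = xml_dom.createElement(tag)
        for child_tag, text in (("action", action), ("command", command)):
            child = xml_dom.createElement(child_tag)
            child.appendChild(xml_dom.createTextNode(text))
            hook_node.appendChild(child)
        parent_node.appendChild(hook_node)
        return hook_node

    impl = getDOMImplementation()
    dom = impl.createDocument(None, "options", None)
    node = add_hook(dom, dom.documentElement, "collect", "echo hi", before=True)
    ```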

    _addCollectFile(xmlDom, parentNode, collectFile)
    Static Method

    source code 

    Adds a collect file container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            file/abs_path
      collectMode             file/collect_mode
      archiveMode             file/archive_mode
    

    Note that for consistency with collect directory handling we'll only emit the preferred collect_mode tag.

    The <file> node itself is created as the next child of the parent node. This method only adds one collect file node. The parent must loop for each collect file in the CollectConfig object.

    If collectFile is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectFile - Collect file to be added to the document.

    _addCollectDir(xmlDom, parentNode, collectDir)
    Static Method

    source code 

    Adds a collect directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      collectMode             dir/collect_mode
      archiveMode             dir/archive_mode
      ignoreFile              dir/ignore_file
      linkDepth               dir/link_depth
      dereference             dir/dereference
      recursionLevel          dir/recursion_level
    

    Note that an original XML document might have listed the collect mode using the mode tag, since we accept both collect_mode and mode. However, here we'll only emit the preferred collect_mode tag.

    We also add groups of the following items, one list element per item:

      absoluteExcludePaths    dir/exclude/abs_path
      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one collect directory node. The parent must loop for each collect directory in the CollectConfig object.

    If collectDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectDir - Collect directory to be added to the document.

    _addLocalPeer(xmlDom, parentNode, localPeer)
    Static Method

    source code 

    Adds a local peer container as the next child of a parent.

    We add the following fields to the document:

      name                peer/name
      collectDir          peer/collect_dir
      ignoreFailureMode   peer/ignore_failures
    

    Additionally, peer/type is filled in with "local", since this is a local peer.

    The <peer> node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the StageConfig object.

    If localPeer is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • localPeer - Local peer to be added to the document.

    _addRemotePeer(xmlDom, parentNode, remotePeer)
    Static Method

    source code 

    Adds a remote peer container as the next child of a parent.

    We add the following fields to the document:

      name                peer/name
      collectDir          peer/collect_dir
      remoteUser          peer/backup_user
      rcpCommand          peer/rcp_command
      rshCommand          peer/rsh_command
      cbackCommand        peer/cback_command
      ignoreFailureMode   peer/ignore_failures
      managed             peer/managed
      managedActions      peer/managed_actions
    

    Additionally, peer/type is filled in with "remote", since this is a remote peer.

    The <peer> node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the StageConfig object.

    If remotePeer is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • remotePeer - Remote peer to be added to the document.

    _addPurgeDir(xmlDom, parentNode, purgeDir)
    Static Method

    source code 

    Adds a purge directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      retainDays              dir/retain_days
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one purge directory node. The parent must loop for each purge directory in the PurgeConfig object.

    If purgeDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • purgeDir - Purge directory to be added to the document.

    _addDependencies(xmlDom, parentNode, dependencies)
    Static Method

    source code 

    Adds extended action dependencies to a parent node.

    We add the following fields to the document:

      runBefore      depends/run_before
      runAfter       depends/run_after
    

    If dependencies is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • dependencies - ActionDependencies object to be added to the document

    _buildCommaSeparatedString(valueList)
    Static Method

    source code 

    Creates a comma-separated string from a list of values.

    As a special case, if valueList is None, then None will be returned.

    Parameters:
    • valueList - List of values to be placed into a string
    Returns:
    Values from valueList as a comma-separated string.
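    A minimal sketch of this helper's documented contract follows; the exact separator formatting of the real implementation may differ.

    ```python
    def build_comma_separated(value_list):
        # Sketch of the documented behavior: None passes through as None,
        # otherwise the values are joined into one comma-separated string.
        if value_list is None:
            return None
        return ",".join(value_list)
    ```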

    _addBlankBehavior(xmlDom, parentNode, blankBehavior)
    Static Method

    source code 

    Adds a blanking behavior container as the next child of a parent.

    We add the following fields to the document:

      blankMode    blank_behavior/mode
      blankFactor  blank_behavior/factor
    

    The <blank_behavior> node itself is created as the next child of the parent node.

    If blankBehavior is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • blankBehavior - Blanking behavior to be added to the document.

    _validateContents(self)

    source code 

    Validates configuration contents per rules discussed in module documentation.

    This is the second pass at validation. It ensures that any filled-in section contains valid data. Any section which is not set to None is validated per the rules for that section, laid out in the module documentation (above).

    Raises:
    • ValueError - If configuration is invalid.

    _validateReference(self)

    source code 

    Validates reference configuration. There are currently no reference-related validations.

    Raises:
    • ValueError - If reference configuration is invalid.

    _validateExtensions(self)

    source code 

    Validates extensions configuration.

    The list of actions may be either None or an empty list [] if desired. Each extended action must include a name, a module, and a function.

    Then, if the order mode is None or "index", an index is required; and if the order mode is "dependency", dependency information is required.

    Raises:
    • ValueError - If extensions configuration is invalid.

    _validateOptions(self)

    source code 

    Validates options configuration.

    All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose.

    Raises:
    • ValueError - If options configuration is invalid.

    _validatePeers(self)

    source code 

    Validates peers configuration per rules in _validatePeerList.

    Raises:
    • ValueError - If peers configuration is invalid.

    _validateCollect(self)

    source code 

    Validates collect configuration.

    The target directory must be filled in. The collect mode, archive mode, ignore file, and recursion level are all optional. The list of absolute paths to exclude and patterns to exclude may be either None or an empty list [] if desired.

    Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent CollectConfig object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either None or an empty list [] if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the CollectConfig object to make the complete list for a given directory.

    Raises:
    • ValueError - If collect configuration is invalid.

    _validateStage(self)

    source code 

    Validates stage configuration.

    The target directory must be filled in, and the peers are also validated.

    Peers are only required in this section if the peers configuration section is not filled in. However, if any peers are filled in here, they override the peers configuration and must meet the validation criteria in _validatePeerList.

    Raises:
    • ValueError - If stage configuration is invalid.

    _validateStore(self)

    source code 

    Validates store configuration.

    The device type, drive speed, and blanking behavior are optional. All other values are required. Missing booleans will be set to defaults.

    If blanking behavior is provided, then both a blanking mode and a blanking factor are required.

    The image writer functionality in the writer module is supposed to be able to handle a device speed of None.

    Any caller which needs a "real" (non-None) value for the device type can use DEFAULT_DEVICE_TYPE, which is guaranteed to be sensible.

    This is also where we make sure that the media type -- which is already a valid type -- matches up properly with the device type.

    Raises:
    • ValueError - If store configuration is invalid.

    _validatePurge(self)

    source code 

    Validates purge configuration.

    The list of purge directories may be either None or an empty list [] if desired. All purge directories must contain a path and a retain days value.

    Raises:
    • ValueError - If purge configuration is invalid.

    _validatePeerList(self, localPeers, remotePeers)

    source code 

    Validates the set of local and remote peers.

    Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section.

    Parameters:
    • localPeers - List of local peers
    • remotePeers - List of remote peers
    Raises:
    • ValueError - If the peer list configuration is invalid.
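The defaulting behavior for remote peers can be sketched as follows. This is a standalone illustration of the rule described above, not the library's actual code:

```python
def resolve_remote_peer(remote_user, rcp_command, options_backup_user, options_rcp_command):
    """Fill unset remote-peer fields from the options section."""
    user = remote_user if remote_user is not None else options_backup_user
    rcp = rcp_command if rcp_command is not None else options_rcp_command
    return user, rcp
```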

    Property Details

    reference

    Reference configuration in terms of a ReferenceConfig object.

    Get Method:
    _getReference(self) - Property target used to get the reference configuration value.
    Set Method:
    _setReference(self, value) - Property target used to set the reference configuration value.

    extensions

    Extensions configuration in terms of an ExtensionsConfig object.

    Get Method:
    _getExtensions(self) - Property target used to get the extensions configuration value.
    Set Method:
    _setExtensions(self, value) - Property target used to set the extensions configuration value.

    options

    Options configuration in terms of an OptionsConfig object.

    Get Method:
    _getOptions(self) - Property target used to get the options configuration value.
    Set Method:
    _setOptions(self, value) - Property target used to set the options configuration value.

    collect

    Collect configuration in terms of a CollectConfig object.

    Get Method:
    _getCollect(self) - Property target used to get the collect configuration value.
    Set Method:
    _setCollect(self, value) - Property target used to set the collect configuration value.

    stage

    Stage configuration in terms of a StageConfig object.

    Get Method:
    _getStage(self) - Property target used to get the stage configuration value.
    Set Method:
    _setStage(self, value) - Property target used to set the stage configuration value.

    store

    Store configuration in terms of a StoreConfig object.

    Get Method:
    _getStore(self) - Property target used to get the store configuration value.
    Set Method:
    _setStore(self, value) - Property target used to set the store configuration value.

    purge

    Purge configuration in terms of a PurgeConfig object.

    Get Method:
    _getPurge(self) - Property target used to get the purge configuration value.
    Set Method:
    _setPurge(self, value) - Property target used to set the purge configuration value.

    peers

    Peers configuration in terms of a PeersConfig object.

    Get Method:
    _getPeers(self) - Property target used to get the peers configuration value.
    Set Method:
    _setPeers(self, value) - Property target used to set the peers configuration value.

CedarBackup2.extend.mbox.MboxDir
    Package CedarBackup2 :: Package extend :: Module mbox :: Class MboxDir

    Class MboxDir

    source code

    object --+
             |
            MboxDir
    

    Class representing mbox directory configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    Unlike collect directory configuration, this is the only place exclusions are allowed (no global exclusions at the <mbox> configuration level). Also, we only allow relative exclusions and there is no configured ignore file. This is because mbox directory backups are not recursive.
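The restrictions listed above can be mirrored in a standalone check. Note that the two mode lists below are assumptions for illustration; the actual contents of VALID_COLLECT_MODES and VALID_COMPRESS_MODES are defined in the config module, not in this page.

```python
import os

VALID_COLLECT_MODES = ["daily", "weekly", "incr"]    # assumed values
VALID_COMPRESS_MODES = ["none", "gzip", "bzip2"]     # assumed values

def validate_mbox_dir(absolute_path, collect_mode=None, compress_mode=None):
    """Standalone sketch of the MboxDir restrictions listed above."""
    if not os.path.isabs(absolute_path):
        raise ValueError("The absolute path must be absolute: %s" % absolute_path)
    if collect_mode is not None and collect_mode not in VALID_COLLECT_MODES:
        raise ValueError("Invalid collect mode: %s" % collect_mode)
    if compress_mode is not None and compress_mode not in VALID_COMPRESS_MODES:
        raise ValueError("Invalid compress mode: %s" % compress_mode)
```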

    Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    Constructor for the MboxDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      absolutePath
    Absolute path to the mbox directory.
      collectMode
    Overridden collect mode for this mbox directory.
      compressMode
    Overridden compress mode for this mbox directory.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.

    Inherited from object: __class__

    Method Details

    __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    (Constructor)

    source code 

    Constructor for the MboxDir class.

    You should never directly instantiate this class.

    Parameters:
    • absolutePath - Absolute path to an mbox directory on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
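The -1/0/1 contract follows the Python 2 cmp() convention, which can be expressed compactly (a sketch of the convention, not this class's actual implementation):

```python
def compare(a, b):
    """Three-way comparison in the Python 2 cmp() style used by __cmp__."""
    return (a > b) - (a < b)
```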

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.
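The property-target pattern used throughout this module can be sketched as follows. The class and attribute names here are illustrative; only the validation rule (absolute path or None, need not exist on disk) is taken from the text above.

```python
import os

class PathHolder(object):
    """Minimal sketch of the get/set property-target pattern."""

    def __init__(self):
        self._absolutePath = None

    def _setAbsolutePath(self, value):
        # The value must be an absolute path if it is not None; it need
        # not exist on disk at the time of assignment.
        if value is not None and not os.path.isabs(value):
            raise ValueError("Not an absolute path: %s" % value)
        self._absolutePath = value

    def _getAbsolutePath(self):
        return self._absolutePath

    absolutePath = property(_getAbsolutePath, _setAbsolutePath)
```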

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


    Property Details

    absolutePath

    Absolute path to the mbox directory.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this mbox directory.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this mbox directory.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

CedarBackup2.extend.subversion.Repository
    Package CedarBackup2 :: Package extend :: Module subversion :: Class Repository

    Class Repository

    source code

    object --+
             |
            Repository
    
    Known Subclasses:
        FSFSRepository

    Class representing generic Subversion repository configuration.

    The following restrictions exist on data in this class:

    • The repository path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    The repository type value is kept around just for reference. It doesn't affect the behavior of the backup.

    Instance Methods
     
    __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the Repository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setRepositoryType(self, value)
    Property target used to set the repository type.
    source code
     
    _getRepositoryType(self)
    Property target used to get the repository type.
    source code
     
    _setRepositoryPath(self, value)
    Property target used to set the repository path.
    source code
     
    _getRepositoryPath(self)
    Property target used to get the repository path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      repositoryPath
    Path to the repository to collect.
      collectMode
    Overridden collect mode for this repository.
      compressMode
    Overridden compress mode for this repository.
      repositoryType
    Type of this repository, for reference.

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the Repository class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setRepositoryType(self, value)

    source code 

    Property target used to set the repository type. There is no validation; this value is kept around just for reference.

    _setRepositoryPath(self, value)

    source code 

    Property target used to set the repository path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    repositoryPath

    Path to the repository to collect.

    Get Method:
    _getRepositoryPath(self) - Property target used to get the repository path.
    Set Method:
    _setRepositoryPath(self, value) - Property target used to set the repository path.

    collectMode

    Overridden collect mode for this repository.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this repository.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositoryType

    Type of this repository, for reference.

    Get Method:
    _getRepositoryType(self) - Property target used to get the repository type.
    Set Method:
    _setRepositoryType(self, value) - Property target used to set the repository type.

CedarBackup2.extend.subversion.FSFSRepository
    Package CedarBackup2 :: Package extend :: Module subversion :: Class FSFSRepository

    Class FSFSRepository

    source code

    object --+    
             |    
    Repository --+
                 |
                FSFSRepository
    

    Class representing Subversion FSFS repository configuration. This object is deprecated. Use a simple Repository instead.

    Instance Methods
     
    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the FSFSRepository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from Repository: __cmp__, __str__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from Repository: collectMode, compressMode, repositoryPath, repositoryType

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the FSFSRepository class.

    Parameters:
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

CedarBackup2.writer
    Package CedarBackup2 :: Module writer

    Module writer

    source code

    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables
      __package__ = 'CedarBackup2'
util

    Module util


    Classes

    AbsolutePathList
    Diagnostics
    DirectedGraph
    ObjectTypeList
    PathResolverSingleton
    Pipe
    RegexList
    RegexMatchList
    RestrictedContentList
    UnorderedList

    Functions

    buildNormalizedPath
    calculateFileAge
    changeOwnership
    checkUnique
    convertSize
    dereferenceLink
    deriveDayOfWeek
    deviceMounted
    displayBytes
    encodePath
    executeCommand
    getFunctionReference
    getUidGid
    isRunningAsRoot
    isStartOfWeek
    mount
    nullDevice
    parseCommaSeparatedString
    removeKeys
    resolveCommand
    sanitizeEnvironment
    sortDict
    splitCommandLine
    unmount

    Variables

    BYTES_PER_GBYTE
    BYTES_PER_KBYTE
    BYTES_PER_MBYTE
    BYTES_PER_SECTOR
    DEFAULT_LANGUAGE
    HOURS_PER_DAY
    ISO_SECTOR_SIZE
    KBYTES_PER_MBYTE
    LANG_VAR
    LOCALE_VARS
    MBYTES_PER_GBYTE
    MINUTES_PER_HOUR
    MOUNT_COMMAND
    MTAB_FILE
    SECONDS_PER_DAY
    SECONDS_PER_MINUTE
    UMOUNT_COMMAND
    UNIT_BYTES
    UNIT_GBYTES
    UNIT_KBYTES
    UNIT_MBYTES
    UNIT_SECTORS
    __package__
    logger
    outputLogger

CedarBackup2.util.PathResolverSingleton._Helper
    Package CedarBackup2 :: Module util :: Class PathResolverSingleton :: Class _Helper

    Class _Helper

    source code

    Helper class to provide a singleton factory method.

    Instance Methods
     
    __init__(self) source code
     
    __call__(self, *args, **kw) source code
CedarBackup2.action
    Package CedarBackup2 :: Module action

    Source Code for Module CedarBackup2.action

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Cedar Backup, release 2 
    14  # Revision : $Id: action.py 1022 2011-10-11 23:27:49Z pronovic $ 
    15  # Purpose  : Provides implementation of various backup-related actions. 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  ######################################################################## 
    20  # Module documentation 
    21  ######################################################################## 
    22   
    23  """ 
    24  Provides interface backwards compatibility. 
    25   
    26  In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code 
    27  for the standard actions.  The code formerly in action.py was split into 
    28  various other files in the CedarBackup2.actions package.  This mostly-empty 
    29  file remains to preserve the Cedar Backup library interface. 
    30   
    31  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    32  """ 
    33   
    34  ######################################################################## 
    35  # Imported modules 
    36  ######################################################################## 
    37   
    38  # pylint: disable=W0611 
    39  from CedarBackup2.actions.collect import executeCollect 
    40  from CedarBackup2.actions.stage import executeStage 
    41  from CedarBackup2.actions.store import executeStore 
    42  from CedarBackup2.actions.purge import executePurge 
    43  from CedarBackup2.actions.rebuild import executeRebuild 
    44  from CedarBackup2.actions.validate import executeValidate 
    45   
    

CedarBackup2.config.OptionsConfig
    Package CedarBackup2 :: Module config :: Class OptionsConfig

    Class OptionsConfig

    source code

    object --+
             |
            OptionsConfig
    

    Class representing a Cedar Backup global options configuration.

    The options section is used to store global configuration options and defaults that can be applied to other sections.

    The following restrictions exist on data in this class:

    • The working directory must be an absolute path.
    • The starting day must be a day of the week in English, i.e. "monday", "tuesday", etc.
    • All of the other values must be non-empty strings if they are set to something other than None.
    • The overrides list must be a list of CommandOverride objects.
    • The hooks list must be a list of ActionHook objects.
    • The cback command must be a non-empty string.
    • Any managed action name must be a non-empty string matching ACTION_NAME_REGEX.
    Instance Methods
     
    __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None)
    Constructor for the OptionsConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    addOverride(self, command, absolutePath)
    If no override currently exists for the command, add one.
    source code
     
    replaceOverride(self, command, absolutePath)
    If override currently exists for the command, replace it; otherwise add it.
    source code
     
    _setStartingDay(self, value)
    Property target used to set the starting day.
    source code
     
    _getStartingDay(self)
    Property target used to get the starting day.
    source code
     
    _setWorkingDir(self, value)
    Property target used to set the working directory.
    source code
     
    _getWorkingDir(self)
    Property target used to get the working directory.
    source code
     
    _setBackupUser(self, value)
    Property target used to set the backup user.
    source code
     
    _getBackupUser(self)
    Property target used to get the backup user.
    source code
     
    _setBackupGroup(self, value)
    Property target used to set the backup group.
    source code
     
    _getBackupGroup(self)
    Property target used to get the backup group.
    source code
     
    _setRcpCommand(self, value)
    Property target used to set the rcp command.
    source code
     
    _getRcpCommand(self)
    Property target used to get the rcp command.
    source code
     
    _setRshCommand(self, value)
    Property target used to set the rsh command.
    source code
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target used to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setOverrides(self, value)
    Property target used to set the command path overrides list.
    source code
     
    _getOverrides(self)
    Property target used to get the command path overrides list.
    source code
     
    _setHooks(self, value)
    Property target used to set the pre- and post-action hooks list.
    source code
     
    _getHooks(self)
    Property target used to get the command path hooks list.
    source code
     
    _setManagedActions(self, value)
    Property target used to set the managed actions list.
    source code
     
    _getManagedActions(self)
    Property target used to get the managed actions list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      startingDay
    Day that starts the week.
      workingDir
    Working (temporary) directory to use for backups.
      backupUser
    Effective user that backups should run as.
      backupGroup
    Effective group that backups should run as.
      rcpCommand
    Default rcp-compatible copy command for staging.
      rshCommand
    Default rsh-compatible command to use for remote shells.
      overrides
    List of configured command path overrides, if any.
      cbackCommand
    Default cback-compatible command to use on managed remote peers.
      hooks
    List of configured pre- and post-action hooks.
      managedActions
    Default set of actions that are managed on remote peers.

    Inherited from object: __class__

    Method Details

    __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None)
    (Constructor)

    source code 

    Constructor for the OptionsConfig class.

    Parameters:
    • startingDay - Day that starts the week.
    • workingDir - Working (temporary) directory to use for backups.
    • backupUser - Effective user that backups should run as.
    • backupGroup - Effective group that backups should run as.
    • rcpCommand - Default rcp-compatible copy command for staging.
    • rshCommand - Default rsh-compatible command to use for remote shells.
    • cbackCommand - Default cback-compatible command to use on managed remote peers.
    • overrides - List of configured command path overrides, if any.
    • hooks - List of configured pre- and post-action hooks.
    • managedActions - Default set of actions that are managed on remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    addOverride(self, command, absolutePath)

    source code 

    If no override currently exists for the command, add one.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.

    replaceOverride(self, command, absolutePath)

    source code 

    If override currently exists for the command, replace it; otherwise add it.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.
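The add-versus-replace semantics of these two methods can be sketched with a simple command-to-path mapping (illustrative only; the real class stores a list of CommandOverride objects):

```python
class Overrides(object):
    """Sketch of addOverride vs. replaceOverride semantics."""

    def __init__(self):
        self.overrides = {}  # command name -> absolute path

    def add_override(self, command, absolute_path):
        # Adds an override only if none currently exists for the command.
        if command not in self.overrides:
            self.overrides[command] = absolute_path

    def replace_override(self, command, absolute_path):
        # Replaces an existing override, or adds one if none exists.
        self.overrides[command] = absolute_path
```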

    _setStartingDay(self, value)

    source code 

    Property target used to set the starting day. If it is not None, the value must be a valid English day of the week, one of "monday", "tuesday", "wednesday", etc.

    Raises:
    • ValueError - If the value is not a valid day of the week.
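The day-of-week check described above amounts to membership in a fixed lowercase list; a standalone sketch (not the library's actual code):

```python
VALID_DAYS = ["monday", "tuesday", "wednesday", "thursday",
              "friday", "saturday", "sunday"]

def set_starting_day(value):
    """Validate a starting-day value per the rule described above."""
    if value is not None and value not in VALID_DAYS:
        raise ValueError("Invalid starting day: %s" % value)
    return value
```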

    _setWorkingDir(self, value)

    source code 

    Property target used to set the working directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setBackupUser(self, value)

    source code 

    Property target used to set the backup user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setBackupGroup(self, value)

    source code 

    Property target used to set the backup group. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)

    source code 

    Property target used to set the rcp command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)

    source code 

    Property target used to set the rsh command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)

    source code 

    Property target used to set the cback command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setOverrides(self, value)

    source code 

    Property target used to set the command path overrides list. Either the value must be None or each element must be a CommandOverride.

    Raises:
    • ValueError - If the value is not a CommandOverride

    _setHooks(self, value)

    source code 

    Property target used to set the pre- and post-action hooks list. Either the value must be None or each element must be an ActionHook.

    Raises:
    • ValueError - If the value is not an ActionHook

    _setManagedActions(self, value)

    source code 

    Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment.


    Property Details

    startingDay

    Day that starts the week.

    Get Method:
    _getStartingDay(self) - Property target used to get the starting day.
    Set Method:
    _setStartingDay(self, value) - Property target used to set the starting day.

    workingDir

    Working (temporary) directory to use for backups.

    Get Method:
    _getWorkingDir(self) - Property target used to get the working directory.
    Set Method:
    _setWorkingDir(self, value) - Property target used to set the working directory.

    backupUser

    Effective user that backups should run as.

    Get Method:
    _getBackupUser(self) - Property target used to get the backup user.
    Set Method:
    _setBackupUser(self, value) - Property target used to set the backup user.

    backupGroup

    Effective group that backups should run as.

    Get Method:
    _getBackupGroup(self) - Property target used to get the backup group.
    Set Method:
    _setBackupGroup(self, value) - Property target used to set the backup group.

    rcpCommand

    Default rcp-compatible copy command for staging.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target used to set the rcp command.

    rshCommand

    Default rsh-compatible command to use for remote shells.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target used to set the rsh command.

    overrides

    List of configured command path overrides, if any.

    Get Method:
    _getOverrides(self) - Property target used to get the command path overrides list.
    Set Method:
    _setOverrides(self, value) - Property target used to set the command path overrides list.

    cbackCommand

    Default cback-compatible command to use on managed remote peers.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target used to set the cback command.

    hooks

    List of configured pre- and post-action hooks.

    Get Method:
    _getHooks(self) - Property target used to get the pre- and post-action hooks list.
    Set Method:
    _setHooks(self, value) - Property target used to set the pre- and post-action hooks list.

    managedActions

    Default set of actions that are managed on remote peers.

    Get Method:
    _getManagedActions(self) - Property target used to get the managed actions list.
    Set Method:
    _setManagedActions(self, value) - Property target used to set the managed actions list.

    CedarBackup2.extend.split
    Package CedarBackup2 :: Package extend :: Module split

    Module split

    source code

    Provides an extension to split up large files in staging directories.

    When this extension is executed, it will look through the configured Cedar Backup staging directory for files exceeding a specified size limit, and split them into smaller files using the 'split' utility. Any directory which has already been split (as indicated by the cback.split file) will be ignored.

    This extension requires a new configuration section <split> and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file.
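    The core of the action described above, carving an oversized file into fixed-size pieces, can be sketched in a few lines of Python. This is only a simplified illustration of what the extension delegates to the 'split' utility; the helper name and the chunk-naming scheme are assumptions, not the extension's actual output format:

    ```python
    def split_file(source_path, split_size):
        """Split source_path into chunks of at most split_size bytes.

        Chunks are written alongside the source as <name>_NNNNN, loosely
        mimicking the numbered pieces the 'split' utility produces.
        Returns the list of chunk paths, in order.
        """
        chunks = []
        index = 0
        with open(source_path, "rb") as source:
            while True:
                data = source.read(split_size)
                if not data:
                    break  # end of source file reached
                chunk_path = "%s_%05d" % (source_path, index)
                with open(chunk_path, "wb") as chunk:
                    chunk.write(data)
                chunks.append(chunk_path)
                index += 1
        return chunks
    ```

    The real extension also reassigns ownership of the resulting chunks to the configured backup user and group, which this sketch omits.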


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      SplitConfig
    Class representing split configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the split backup action.
    source code
     
    _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup)
    Splits large files in a daily staging directory.
    source code
     
    _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False)
    Splits the source file into chunks of the indicated size.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.extend.split")
      SPLIT_COMMAND = ['split']
      SPLIT_INDICATOR = 'cback.split'
      __package__ = 'CedarBackup2.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the split backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup)

    source code 

    Splits large files in a daily staging directory.

    Files that match INDICATOR_PATTERNS (i.e. "cback.store", "cback.stage", etc.) are assumed to be indicator files and are ignored. All other files are split.

    Parameters:
    • dailyDir - Daily directory to split
    • sizeLimit - Size limit, in bytes
    • splitSize - Split size, in bytes
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    Raises:
    • ValueError - If the daily staging directory does not exist.

    _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False)

    source code 

    Splits the source file into chunks of the indicated size.

    The split files will be owned by the indicated backup user and group. If removeSource is True, then the source file will be removed after it is successfully split.

    Parameters:
    • sourcePath - Absolute path of the source file to split
    • splitSize - Split size, in bytes
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    • removeSource - Indicates whether to remove the source file
    Raises:
    • IOError - If there is a problem accessing, splitting or removing the source file.

    CedarBackup2.writers.util.IsoImage
    Package CedarBackup2 :: Package writers :: Module util :: Class IsoImage

    Class IsoImage

    source code

    object --+
             |
            IsoImage
    

    Represents an ISO filesystem image.

    Summary

    This object represents an ISO 9660 filesystem image. It is implemented in terms of the mkisofs program, which has been ported to many operating systems and platforms. A "sensible subset" of the mkisofs functionality is made available through the public interface, allowing callers to set a variety of basic options such as publisher id, application id, etc. as well as specify exactly which files and directories they want included in their image.

    By default, the image is created using the Rock Ridge protocol (using the -r option to mkisofs) because Rock Ridge discs are generally more useful on UN*X filesystems than standard ISO 9660 images. However, callers can fall back to the default mkisofs functionality by setting the useRockRidge instance variable to False. Note, however, that this option is not well-tested.

    Where Files and Directories are Placed in the Image

    Although this class is implemented in terms of the mkisofs program, its standard "image contents" semantics are slightly different than the original mkisofs semantics. The difference is that files and directories are added to the image with some additional information about their source directory kept intact.

    As an example, suppose you add the file /etc/profile to your image and you do not configure a graft point. The file /profile will be created in the image. The behavior for directories is similar. For instance, suppose that you add /etc/X11 to the image and do not configure a graft point. In this case, the directory /X11 will be created in the image, even if the original /etc/X11 directory is empty. This behavior differs from the standard mkisofs behavior!

    If a graft point is configured, it will be used to modify the point at which a file or directory is added into an image. Using the examples from above, let's assume you set a graft point of base when adding /etc/profile and /etc/X11 to your image. In this case, the file /base/profile and the directory /base/X11 would be added to the image.

    I feel that this behavior is more consistent than the original mkisofs behavior. However, to be fair, it is not quite as flexible, and some users might not like it. For this reason, the contentsOnly parameter to the addEntry method can be used to revert to the original behavior if desired.
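    The placement rules above can be condensed into a small helper. This is a hypothetical function written for illustration only, not part of the class's public API:

    ```python
    import os.path

    def image_location(path, graft_point=None, contents_only=False):
        """Return where a file or directory lands inside the ISO image.

        Without a graft point, /etc/profile is placed at /profile; with a
        graft point of "base", it is placed at /base/profile.  With
        contents_only=True a directory contributes only its contents, so
        the directory itself maps to the graft point (or the image root),
        matching the original mkisofs behavior.
        """
        if contents_only:
            return "/" + (graft_point or "").strip("/")
        name = os.path.basename(path.rstrip("/"))
        if graft_point:
            return "/%s/%s" % (graft_point.strip("/"), name)
        return "/" + name
    ```

    For example, `image_location("/etc/profile", "base")` yields `/base/profile`, matching the second example in the text.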

    Instance Methods
     
    __init__(self, device=None, boundaries=None, graftPoint=None)
    Initializes an empty ISO image object.
    source code
     
    addEntry(self, path, graftPoint=None, override=False, contentsOnly=False)
    Adds an individual file or directory into the ISO image.
    source code
     
    getEstimatedSize(self)
    Returns the estimated size (in bytes) of the ISO image.
    source code
     
    _getEstimatedSize(self, entries)
    Returns the estimated size (in bytes) for the passed-in entries dictionary.
    source code
     
    writeImage(self, imagePath)
    Writes this image to disk using the image path.
    source code
     
    _buildGeneralArgs(self)
    Builds a list of general arguments to be passed to a mkisofs command.
    source code
     
    _buildSizeArgs(self, entries)
    Builds a list of arguments to be passed to a mkisofs command.
    source code
     
    _buildWriteArgs(self, entries, imagePath)
    Builds a list of arguments to be passed to a mkisofs command.
    source code
     
    _setDevice(self, value)
    Property target used to set the device value.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _setBoundaries(self, value)
    Property target used to set the boundaries tuple.
    source code
     
    _getBoundaries(self)
    Property target used to get the boundaries value.
    source code
     
    _setGraftPoint(self, value)
    Property target used to set the graft point.
    source code
     
    _getGraftPoint(self)
    Property target used to get the graft point.
    source code
     
    _setUseRockRidge(self, value)
    Property target used to set the use RockRidge flag.
    source code
     
    _getUseRockRidge(self)
    Property target used to get the use RockRidge flag.
    source code
     
    _setApplicationId(self, value)
    Property target used to set the application id.
    source code
     
    _getApplicationId(self)
    Property target used to get the application id.
    source code
     
    _setBiblioFile(self, value)
    Property target used to set the biblio file.
    source code
     
    _getBiblioFile(self)
    Property target used to get the biblio file.
    source code
     
    _setPublisherId(self, value)
    Property target used to set the publisher id.
    source code
     
    _getPublisherId(self)
    Property target used to get the publisher id.
    source code
     
    _setPreparerId(self, value)
    Property target used to set the preparer id.
    source code
     
    _getPreparerId(self)
    Property target used to get the preparer id.
    source code
     
    _setVolumeId(self, value)
    Property target used to set the volume id.
    source code
     
    _getVolumeId(self)
    Property target used to get the volume id.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _buildDirEntries(entries)
    Uses an entries dictionary to build a list of directory locations for use by mkisofs.
    source code
    Properties
      device
    Device that image will be written to (device path or SCSI id).
      boundaries
    Session boundaries as required by mkisofs.
      graftPoint
    Default image-wide graft point (see addEntry for details).
      useRockRidge
    Indicates whether to use RockRidge (default is True).
      applicationId
    Optionally specifies the ISO header application id value.
      biblioFile
    Optionally specifies the ISO bibliographic file name.
      publisherId
    Optionally specifies the ISO header publisher id value.
      preparerId
    Optionally specifies the ISO header preparer id value.
      volumeId
    Optionally specifies the ISO header volume id value.

    Inherited from object: __class__

    Method Details

    __init__(self, device=None, boundaries=None, graftPoint=None)
    (Constructor)

    source code 

    Initializes an empty ISO image object.

    Only the most commonly-used configuration items can be set using this constructor. If you have a need to change the others, do so immediately after creating your object.

    The device and boundaries values are both required in order to write multisession discs. If either is missing or None, a multisession disc will not be written. The boundaries tuple is in terms of ISO sectors, as built by an image writer class and returned in a writer.MediaCapacity object.

    Parameters:
    • device (Either be a filesystem path or a SCSI address) - Name of the device that the image will be written to
    • boundaries (Tuple (last_sess_start,next_sess_start) as returned from cdrecord -msinfo, or None) - Session boundaries as required by mkisofs
    • graftPoint (String representing a graft point path (see addEntry).) - Default graft point for this image.
    Overrides: object.__init__

    addEntry(self, path, graftPoint=None, override=False, contentsOnly=False)

    source code 

    Adds an individual file or directory into the ISO image.

    The path must exist and must be a file or a directory. By default, the entry will be placed into the image at the root directory, but this behavior can be overridden using the graftPoint parameter or instance variable.

    You can use the contentsOnly behavior to revert to the "original" mkisofs behavior for adding directories, which is to add only the items within the directory, and not the directory itself.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    • override (Boolean true/false) - Override an existing entry with the same path.
    • contentsOnly (Boolean true/false) - Add directory contents only (standard mkisofs behavior).
    Raises:
    • ValueError - If path is not a file or directory, or does not exist.
    • ValueError - If the path has already been added, and override is not set.
    • ValueError - If a path cannot be encoded properly.
    Notes:
    • Things get odd if you try to add a directory to an image that will be written to a multisession disc, and the same directory already exists in an earlier session on that disc. Not all of the data gets written. You really wouldn't want to do this anyway, I guess.
    • An exception will be thrown if the path has already been added to the image, unless the override parameter is set to True.
    • The method's graftPoint parameter overrides the object-wide instance variable. If neither the method parameter nor the object-wide value is set, the path will be written at the image root. The graft point behavior is determined by the value which is in effect at the time this method is called, so you must set the object-wide value before calling this method for the first time, or your image may not be consistent.
    • You cannot use the local graftPoint parameter to "turn off" an object-wide instance variable by setting it to None. Python's default argument functionality buys us a lot, but it can't make this method psychic. :)

    getEstimatedSize(self)

    source code 

    Returns the estimated size (in bytes) of the ISO image.

    This is implemented via the -print-size option to mkisofs, so it might take a bit of time to execute. However, the result is as accurate as we can get, since it takes into account all of the ISO overhead, the true cost of directories in the structure, and so on.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If there are no filesystem entries in the image
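    Since mkisofs -print-size reports a count of ISO sectors rather than bytes, the byte estimate is a simple multiplication by the ISO-9660 data sector size. The helper name below is illustrative, not part of this class:

    ```python
    ISO_SECTOR_SIZE = 2048  # bytes per ISO-9660 data sector

    def sectors_to_bytes(sectors):
        """Convert a sector count from 'mkisofs -print-size' into bytes."""
        return sectors * ISO_SECTOR_SIZE
    ```

    For instance, a 358400-sector image corresponds to 734003200 bytes, roughly the capacity of a 700 MB CD.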

    _getEstimatedSize(self, entries)

    source code 

    Returns the estimated size (in bytes) for the passed-in entries dictionary.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.

    writeImage(self, imagePath)

    source code 

    Writes this image to disk using the image path.

    Parameters:
    • imagePath (String representing a path on disk) - Path to write image out as
    Raises:
    • IOError - If there is an error writing the image to disk.
    • ValueError - If there are no filesystem entries in the image
    • ValueError - If a path cannot be encoded properly.

    _buildDirEntries(entries)
    Static Method

    source code 

    Uses an entries dictionary to build a list of directory locations for use by mkisofs.

    We build a list of entries that can be passed to mkisofs. Each entry is either raw (if no graft point was configured) or in graft-point form as described above (if a graft point was configured). The dictionary keys are the path names, and the values are the graft points, if any.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    Returns:
    List of directory locations for use by mkisofs
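    The raw-versus-graft-point distinction described above can be sketched as follows. This is a simplified stand-in for the real static method, and the sorted ordering is an assumption made so the output is deterministic:

    ```python
    def build_dir_entries(entries):
        """Build mkisofs path arguments from a dict of {path: graft_point}.

        Entries with no graft point are passed through raw; entries with a
        graft point use the mkisofs "graft/point/=path" syntax.
        """
        args = []
        for path, graft_point in sorted(entries.items()):
            if graft_point is None:
                args.append(path)  # raw entry, placed per mkisofs defaults
            else:
                args.append("%s/=%s" % (graft_point.strip("/"), path))
        return args
    ```

    The resulting list is suitable for appending to the argument list built for util.executeCommand.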

    _buildGeneralArgs(self)

    source code 

    Builds a list of general arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildSizeArgs(self, entries)

    source code 

    Builds a list of arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. The command will be built to just return size output (a simple count of sectors via the -print-size option), rather than an image file on disk.

    By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildWriteArgs(self, entries, imagePath)

    source code 

    Builds a list of arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. The command will be built to write an image to disk.

    By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    • imagePath (String representing a path on disk) - Path to write image out as
    Returns:
    List suitable for passing to util.executeCommand as args.

    _setDevice(self, value)

    source code 

    Property target used to set the device value. If not None, the value can be either an absolute path or a SCSI id.

    Raises:
    • ValueError - If the value is not valid

    _setBoundaries(self, value)

    source code 

    Property target used to set the boundaries tuple. If not None, the value must be a tuple of two integers.

    Raises:
    • ValueError - If the tuple values are not integers.
    • IndexError - If the tuple does not contain enough elements.

    _setGraftPoint(self, value)

    source code 

    Property target used to set the graft point. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setUseRockRidge(self, value)

    source code 

    Property target used to set the use RockRidge flag. No validations, but we normalize the value to True or False.

    _setApplicationId(self, value)

    source code 

    Property target used to set the application id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setBiblioFile(self, value)

    source code 

    Property target used to set the biblio file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setPublisherId(self, value)

    source code 

    Property target used to set the publisher id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setPreparerId(self, value)

    source code 

    Property target used to set the preparer id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setVolumeId(self, value)

    source code 

    Property target used to set the volume id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    Property Details

    device

    Device that image will be written to (device path or SCSI id).

    Get Method:
    _getDevice(self) - Property target used to get the device value.
    Set Method:
    _setDevice(self, value) - Property target used to set the device value.

    boundaries

    Session boundaries as required by mkisofs.

    Get Method:
    _getBoundaries(self) - Property target used to get the boundaries value.
    Set Method:
    _setBoundaries(self, value) - Property target used to set the boundaries tuple.

    graftPoint

    Default image-wide graft point (see addEntry for details).

    Get Method:
    _getGraftPoint(self) - Property target used to get the graft point.
    Set Method:
    _setGraftPoint(self, value) - Property target used to set the graft point.

    useRockRidge

    Indicates whether to use RockRidge (default is True).

    Get Method:
    _getUseRockRidge(self) - Property target used to get the use RockRidge flag.
    Set Method:
    _setUseRockRidge(self, value) - Property target used to set the use RockRidge flag.

    applicationId

    Optionally specifies the ISO header application id value.

    Get Method:
    _getApplicationId(self) - Property target used to get the application id.
    Set Method:
    _setApplicationId(self, value) - Property target used to set the application id.

    biblioFile

    Optionally specifies the ISO bibliographic file name.

    Get Method:
    _getBiblioFile(self) - Property target used to get the biblio file.
    Set Method:
    _setBiblioFile(self, value) - Property target used to set the biblio file.

    publisherId

    Optionally specifies the ISO header publisher id value.

    Get Method:
    _getPublisherId(self) - Property target used to get the publisher id.
    Set Method:
    _setPublisherId(self, value) - Property target used to set the publisher id.

    preparerId

    Optionally specifies the ISO header preparer id value.

    Get Method:
    _getPreparerId(self) - Property target used to get the preparer id.
    Set Method:
    _setPreparerId(self, value) - Property target used to set the preparer id.

    volumeId

    Optionally specifies the ISO header volume id value.

    Get Method:
    _getVolumeId(self) - Property target used to get the volume id.
    Set Method:
    _setVolumeId(self, value) - Property target used to set the volume id.

    CedarBackup2.actions
    Package CedarBackup2 :: Package actions

    Package actions

    source code

    Cedar Backup actions.

    This package contains code related to the official Cedar Backup actions (collect, stage, store, purge, rebuild, and validate).

    The action modules consist of mostly "glue" code that uses other lower-level functionality to actually implement a backup. There is one module for each high-level backup action, plus a module that provides shared constants.

    All of the public action functions implement the Cedar Backup Extension Architecture Interface, i.e. the same interface that extensions implement.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules

    Variables
      __package__ = None
    CedarBackup2.filesystem.BackupFileList
    Package CedarBackup2 :: Module filesystem :: Class BackupFileList

    Class BackupFileList

    source code

    object --+        
             |        
          list --+    
                 |    
    FilesystemList --+
                     |
                    BackupFileList
    

    List of files to be backed up.

    A BackupFileList is a FilesystemList containing a list of files to be backed up. It only contains files, not directories (soft links are treated like files). On top of the generic functionality provided by FilesystemList, this class adds functionality to keep a hash (checksum) for each file in the list, and it also provides a method to calculate the total size of the files in the list and a way to export the list into tar form.

    Instance Methods
    new empty list
    __init__(self)
    Initializes a list with no configured exclusions.
    source code
     
    addDir(self, path)
    Adds a directory to the list.
    source code
     
    totalSize(self)
    Returns the total size among all files in the list.
    source code
     
    generateSizeMap(self)
    Generates a mapping from file to file size in bytes.
    source code
     
    generateDigestMap(self, stripPrefix=None)
    Generates a mapping from file to file digest.
    source code
     
    generateFitted(self, capacity, algorithm='worst_fit')
    Generates a list of items that fit in the indicated capacity.
    source code
     
    generateTarfile(self, path, mode='tar', ignore=False, flat=False)
    Creates a tar file containing the files in the list.
    source code
     
    removeUnchanged(self, digestMap, captureDigest=False)
    Removes unchanged entries from the list.
    source code
     
    generateSpan(self, capacity, algorithm='worst_fit')
    Splits the list of items into sub-lists that fit in a given capacity.
    source code
     
    _getKnapsackTable(self, capacity=None)
    Converts the list into the form needed by the knapsack algorithms.
    source code

    Inherited from FilesystemList: addDirContents, addFile, normalize, removeDirs, removeFiles, removeInvalid, removeLinks, removeMatch, verify

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Static Methods
     
    _generateDigest(path)
    Generates an SHA digest for a given file on disk.
    source code
     
    _getKnapsackFunction(algorithm)
    Returns a reference to the function associated with an algorithm name.
    source code
    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from FilesystemList: excludeBasenamePatterns, excludeDirs, excludeFiles, excludeLinks, excludePaths, excludePatterns, ignoreFile

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)

    source code 

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addDir(self, path)

    source code 

    Adds a directory to the list.

    Note that this class does not allow directories to be added by themselves (a backup list contains only files). However, since links to directories are technically files, we allow them to be added.

    This method is implemented in terms of the superclass method, with one additional validation: the superclass method is only called if the passed-in path is both a directory and a link. All of the superclass's existing validations and restrictions apply.

    Parameters:
    • path (String representing a path on disk) - Directory path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Overrides: FilesystemList.addDir

    totalSize(self)

    source code 

    Returns the total size among all files in the list. Only files are counted. Soft links that point at files are ignored. Entries which do not exist on disk are ignored.

    Returns:
    Total size, in bytes

    generateSizeMap(self)

    source code 

    Generates a mapping from file to file size in bytes. The mapping does include soft links, which are listed with size zero. Entries which do not exist on disk are ignored.

    Returns:
    Dictionary mapping file to file size

    generateDigestMap(self, stripPrefix=None)

    source code 

    Generates a mapping from file to file digest.

    Currently, the digest is an SHA hash, which should be pretty secure. In the future, this might be a different kind of hash, but we guarantee that the type of the hash will not change unless the library major version number is bumped.

    Entries which do not exist on disk are ignored.

    Soft links are ignored. We would end up generating a digest for the file that the soft link points at, which doesn't make any sense.

    If stripPrefix is passed in, then that prefix will be stripped from each key when the map is generated. This can be useful in generating two "relative" digest maps to be compared to one another.

    Parameters:
    • stripPrefix (String with any contents) - Common prefix to be stripped from paths
    Returns:
    Dictionary mapping file to digest value

    See Also: removeUnchanged
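    The behavior described above, an SHA digest per regular file with soft links and missing entries skipped and an optional prefix stripped from the keys, can be sketched with the standard hashlib module. This is an illustrative standalone function, not the class's actual implementation:

    ```python
    import hashlib
    import os

    def generate_digest_map(paths, strip_prefix=None):
        """Map each regular file in paths to its SHA-1 hex digest.

        Soft links and entries missing from disk are skipped.  If
        strip_prefix is given, it is removed from the front of each key,
        which is useful for comparing two "relative" digest maps.
        """
        digests = {}
        for path in paths:
            # Skip soft links and anything that is not a regular file.
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            sha = hashlib.sha1()
            with open(path, "rb") as f:
                # Hash incrementally so large files don't fill memory.
                for block in iter(lambda: f.read(65536), b""):
                    sha.update(block)
            key = path
            if strip_prefix is not None and key.startswith(strip_prefix):
                key = key[len(strip_prefix):]
            digests[key] = sha.hexdigest()
        return digests
    ```

    Comparing two such maps built with the same strip_prefix is exactly the pattern removeUnchanged relies on.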

    generateFitted(self, capacity, algorithm='worst_fit')

    source code 

    Generates a list of items that fit in the indicated capacity.

    Sometimes, callers would like to include every item in a list, but are unable to because not all of the items fit in the space available. This method returns a copy of the list, containing only the items that fit in a given capacity. A copy is returned so that we don't lose any information if for some reason the fitted list is unsatisfactory.

    The fitting is done using the functions in the knapsack module. By default, the worst fit algorithm is used (matching the default argument), but you can also choose from first fit, best fit and alternate fit.

    Parameters:
    • capacity (Integer, in bytes) - Maximum total size of the files in the new list
    • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
    Returns:
    Copy of list with total size no larger than indicated capacity
    Raises:
    • ValueError - If the algorithm is invalid.
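
    To make the fitting idea concrete, here is a minimal first-fit sketch (one of the four algorithm choices listed above). The function name `fit_first` and the `size_map` parameter are assumptions for illustration; the library's actual knapsack module is not reproduced here:

    ```python
    def fit_first(size_map, capacity):
        """Return the subset of items (first fit) whose sizes sum to <= capacity.

        size_map maps item name to size in bytes; items are considered in
        sorted order so that results are deterministic.
        """
        fitted, used = [], 0
        for item in sorted(size_map):
            size = size_map[item]
            if used + size <= capacity:  # take the item if it still fits
                fitted.append(item)
                used += size
        return fitted
    ```

    Because a copy is returned rather than the list being mutated, a caller can retry with a different algorithm if the result is unsatisfactory, just as the method documentation suggests.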

    generateTarfile(self, path, mode='tar', ignore=False, flat=False)

    source code 

    Creates a tar file containing the files in the list.

    By default, this method will create uncompressed tar files. If you pass in mode 'targz', then it will create gzipped tar files, and if you pass in mode 'tarbz2', then it will create bzipped tar files.

    The tar file will be created as a GNU tar archive, which enables extended file name lengths, etc. Since GNU tar is so prevalent, I've decided that the extra functionality outweighs the disadvantage of not being "standard".

    If you pass in flat=True, then a "flat" archive will be created, and all of the files will be added to the root of the archive. So, the file /tmp/something/whatever.txt would be added as just whatever.txt.

    By default, the whole method call fails if there are problems adding any of the files to the archive, resulting in an exception. Under these circumstances, callers are advised that they might want to call removeInvalid() and then attempt to create the tar file a second time, since the most common cause of failures is a missing file (a file that existed when the list was built, but is gone again by the time the tar file is built).

    If you want to, you can pass in ignore=True, and the method will ignore errors encountered when adding individual files to the archive (but not errors opening and closing the archive itself).

    We'll always attempt to remove the tarfile from disk if an exception is thrown.

    Parameters:
    • path (String representing a path on disk) - Path of tar file to create on disk
    • mode (One of either 'tar', 'targz' or 'tarbz2') - Tar creation mode
    • ignore (Boolean) - Indicates whether to ignore certain errors.
    • flat (Boolean) - Creates "flat" archive by putting all items in root
    Raises:
    • ValueError - If mode is not valid
    • ValueError - If list is empty
    • ValueError - If the path could not be encoded properly.
    • TarError - If there is a problem creating the tar file
    Notes:
    • No validation is done as to whether the entries in the list are files, since only files or soft links should be in an object like this. However, to be safe, everything is explicitly added to the tar archive non-recursively so it's safe to include soft links to directories.
    • The Python tarfile module, which is used internally here, is supposed to deal properly with long filenames and links. In my testing, I have found that it appears to be able to add really long filenames to archives, but doesn't do a good job reading them back out, even out of an archive it created. Fortunately, all Cedar Backup does is add files to archives.
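
    The behavior described above (GNU format, non-recursive adds, optional per-file error tolerance, cleanup of a partial archive on failure) can be sketched with the standard tarfile module. This is a simplified stand-in, not the library's actual implementation; the function name `create_tarfile` is hypothetical:

    ```python
    import os
    import tarfile

    def create_tarfile(paths, archive_path, mode="tar", ignore=False, flat=False):
        """Add each path to a GNU-format tar archive, non-recursively."""
        tar_mode = {"tar": "w:", "targz": "w:gz", "tarbz2": "w:bz2"}[mode]
        try:
            with tarfile.open(archive_path, tar_mode, format=tarfile.GNU_FORMAT) as tar:
                for path in paths:
                    arcname = os.path.basename(path) if flat else path
                    try:
                        # recursive=False keeps soft links to directories safe
                        tar.add(path, arcname=arcname, recursive=False)
                    except OSError:
                        if not ignore:
                            raise
        except Exception:
            # don't leave a partial archive behind on failure
            if os.path.exists(archive_path):
                os.remove(archive_path)
            raise
    ```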

    removeUnchanged(self, digestMap, captureDigest=False)

    source code 

    Removes unchanged entries from the list.

    This method relies on a digest map as returned from generateDigestMap. For each entry in digestMap, if the entry also exists in the current list and the entry in the current list has the same digest value as in the map, the entry in the current list will be removed.

    This method offers a convenient way for callers to filter unneeded entries from a list. The idea is that a caller will capture a digest map from generateDigestMap at some point in time (perhaps the beginning of the week), and will save off that map using pickle or some other method. Then, the caller could use this method sometime in the future to filter out any unchanged files based on the saved-off map.

    If captureDigest is passed in as True, then digest information will be captured for the entire list, using the same rules as in generateDigestMap, before the removal step occurs. The check will involve a lookup into the complete digest map.

    If captureDigest is passed in as False, we will only generate a digest value for files we actually need to check, and we'll ignore any entry in the list which isn't a file that currently exists on disk.

    The return value varies depending on captureDigest, as well. To preserve backwards compatibility, if captureDigest is False, then we'll just return a single value representing the number of entries removed. Otherwise, we'll return a tuple of (entries removed, digest map). The returned digest map will be in exactly the form returned by generateDigestMap.

    Parameters:
    • digestMap (Map as returned from generateDigestMap.) - Dictionary mapping file name to digest value.
    • captureDigest (Boolean) - Indicates that digest information should be captured.
    Returns:
    Number of entries removed if captureDigest is False; otherwise a tuple of (entries removed, digest map)

    Note: For performance reasons, this method actually ends up rebuilding the list from scratch. First, we build a temporary dictionary containing all of the items from the original list. Then, we remove items as needed from the dictionary (which is faster than the equivalent operation on a list). Finally, we replace the contents of the current list based on the keys left in the dictionary. This should be transparent to the caller.
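
    The filtering behavior can be illustrated with a small sketch. This is not the library's implementation (which rebuilds the list via a dictionary, as the note explains); the name `remove_unchanged` and the `digest_func` parameter are assumptions for illustration:

    ```python
    def remove_unchanged(paths, digest_map, digest_func):
        """Return (remaining paths, count removed) given a saved-off digest map.

        digest_func(path) computes the current digest; entries missing from
        digest_map are always kept, since they cannot be proven unchanged.
        """
        remaining = []
        removed = 0
        for path in paths:
            if path in digest_map and digest_func(path) == digest_map[path]:
                removed += 1  # unchanged since the map was captured
            else:
                remaining.append(path)
        return remaining, removed
    ```

    A caller would pickle the digest map at the start of the week and feed it back in later, exactly as the method documentation describes.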

    _generateDigest(path)
    Static Method

    source code 

    Generates an SHA digest for a given file on disk.

    The original code for this function used this simplistic implementation, which requires reading the entire file into memory at once in order to generate a digest value:

      sha.new(open(path).read()).hexdigest()
    

    Not surprisingly, this isn't an optimal solution. The "Simple file hashing" Python Cookbook recipe describes how to incrementally generate a hash value by reading in chunks of data rather than reading the file all at once. The recipe relies on the update() method of the various Python hashing algorithms.

    In my tests using a 110 MB file on CD, the original implementation requires 111 seconds. This implementation requires only 40-45 seconds, which is a pretty substantial speed-up.

    Experience shows that reading in around 4kB (4096 bytes) at a time yields the best performance. Smaller reads are quite a bit slower, and larger reads don't make much of a difference. The 4kB number makes me a little suspicious, and I think it might be related to the size of a filesystem read at the hardware level. However, I've decided to just hardcode 4096 until I have evidence that shows it's worthwhile making the read size configurable.

    Parameters:
    • path - Path to generate digest for.
    Returns:
    ASCII-safe SHA digest for the file.
    Raises:
    • OSError - If the file cannot be opened.
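
    The chunked approach described above can be sketched with hashlib (the modern replacement for the deprecated sha module, per the 2.19.4 changelog). The function name `generate_digest` is illustrative, and SHA-1 is assumed here for the digest type:

    ```python
    import hashlib

    def generate_digest(path, chunk_size=4096):
        """Incrementally compute an ASCII-safe SHA-1 digest for a file.

        Reading in 4 kB chunks avoids pulling the whole file into memory,
        matching the incremental recipe discussed in the docstring.
        """
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            while True:
                chunk = handle.read(chunk_size)
                if not chunk:
                    break
                digest.update(chunk)  # feed each chunk into the running hash
        return digest.hexdigest()
    ```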

    generateSpan(self, capacity, algorithm='worst_fit')

    source code 

    Splits the list of items into sub-lists that fit in a given capacity.

    Sometimes, callers need to split a backup file list into a set of smaller lists. For instance, you could use this to "span" the files across a set of discs.

    The fitting is done using the functions in the knapsack module. By default, the worst fit algorithm is used (matching the default argument), but you can also choose from first fit, best fit and alternate fit.

    Parameters:
    • capacity (Integer, in bytes) - Maximum total size of the files in each new list
    • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
    Returns:
    List of SpanItem objects.
    Raises:
    • ValueError - If the algorithm is invalid.
    • ValueError - If it's not possible to fit some items

    Note: If any of your items are larger than the capacity, then it won't be possible to find a solution. In this case, a ValueError will be raised.
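
    The spanning idea can be sketched as a greedy first-fit-decreasing pass, including the ValueError for oversized items noted above. This is an illustrative stand-in (the name `span_items` is hypothetical), not the library's knapsack-based implementation:

    ```python
    def span_items(size_map, capacity):
        """Split items into sublists whose total sizes each fit within capacity.

        Greedy first-fit-decreasing sketch; raises ValueError when any single
        item exceeds the capacity, since no solution is possible then.
        """
        spans = []  # each entry: [remaining capacity, list of items]
        for item, size in sorted(size_map.items(), key=lambda kv: -kv[1]):
            if size > capacity:
                raise ValueError("Item %s does not fit in capacity." % item)
            for span in spans:
                if span[0] >= size:       # reuse the first span with room left
                    span[0] -= size
                    span[1].append(item)
                    break
            else:
                spans.append([capacity - size, [item]])  # open a new span
        return [items for _, items in spans]
    ```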

    _getKnapsackTable(self, capacity=None)

    source code 

    Converts the list into the form needed by the knapsack algorithms.

    Returns:
    Dictionary mapping file name to tuple of (file path, file size).

    _getKnapsackFunction(algorithm)
    Static Method

    source code 

    Returns a reference to the function associated with an algorithm name. The algorithm name must be one of "first_fit", "best_fit", "worst_fit" or "alternate_fit".

    Parameters:
    • algorithm - Name of the algorithm
    Returns:
    Reference to knapsack function
    Raises:
    • ValueError - If the algorithm name is unknown.

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.peer-module.html

    Module peer


    Classes

    LocalPeer
    RemotePeer

    Variables

    DEF_CBACK_COMMAND
    DEF_COLLECT_INDICATOR
    DEF_RCP_COMMAND
    DEF_RSH_COMMAND
    DEF_STAGE_INDICATOR
    SU_COMMAND
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.StageConfig-class.html

    Class StageConfig

    source code

    object --+
             |
            StageConfig
    

    Class representing a Cedar Backup stage configuration.

    The following restrictions exist on data in this class:

    • The target directory must be an absolute path
    • The list of local peers must contain only LocalPeer objects
    • The list of remote peers must contain only RemotePeer objects

    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, targetDir=None, localPeers=None, remotePeers=None)
    Constructor for the StageConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    hasPeers(self)
    Indicates whether any peers are filled into this object.
    source code
     
    _setTargetDir(self, value)
    Property target used to set the target directory.
    source code
     
    _getTargetDir(self)
    Property target used to get the target directory.
    source code
     
    _setLocalPeers(self, value)
    Property target used to set the local peers list.
    source code
     
    _getLocalPeers(self)
    Property target used to get the local peers list.
    source code
     
    _setRemotePeers(self, value)
    Property target used to set the remote peers list.
    source code
     
    _getRemotePeers(self)
    Property target used to get the remote peers list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      targetDir
    Directory to stage files into, by peer name.
      localPeers
    List of local peers.
      remotePeers
    List of remote peers.

    Inherited from object: __class__

    Method Details

    __init__(self, targetDir=None, localPeers=None, remotePeers=None)
    (Constructor)

    source code 

    Constructor for the StageConfig class.

    Parameters:
    • targetDir - Directory to stage files into, by peer name.
    • localPeers - List of local peers.
    • remotePeers - List of remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    hasPeers(self)

    source code 

    Indicates whether any peers are filled into this object.

    Returns:
    Boolean true if any local or remote peers are filled in, false otherwise.

    _setTargetDir(self, value)

    source code 

    Property target used to set the target directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setLocalPeers(self, value)

    source code 

    Property target used to set the local peers list. Either the value must be None or each element must be a LocalPeer.

    Raises:
    • ValueError - If the value is not a LocalPeer.

    _setRemotePeers(self, value)

    source code 

    Property target used to set the remote peers list. Either the value must be None or each element must be a RemotePeer.

    Raises:
    • ValueError - If the value is not a RemotePeer.

    Property Details [hide private]

    targetDir

    Directory to stage files into, by peer name.

    Get Method:
    _getTargetDir(self) - Property target used to get the target directory.
    Set Method:
    _setTargetDir(self, value) - Property target used to set the target directory.

    localPeers

    List of local peers.

    Get Method:
    _getLocalPeers(self) - Property target used to get the local peers list.
    Set Method:
    _setLocalPeers(self, value) - Property target used to set the local peers list.

    remotePeers

    List of remote peers.

    Get Method:
    _getRemotePeers(self) - Property target used to get the remote peers list.
    Set Method:
    _setRemotePeers(self, value) - Property target used to set the remote peers list.
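
    The `_setX`/`_getX` pairs documented above follow the property-with-validation pattern used throughout these config classes: assignments run the setter, which raises ValueError on bad input. A minimal sketch of the pattern (class and defaults are illustrative, not the library's actual code):

    ```python
    import os

    class StageConfigSketch(object):
        """Minimal sketch of the validated-property pattern described above."""

        def __init__(self, targetDir=None):
            self._targetDir = None
            self.targetDir = targetDir  # assignment runs the validation below

        def _setTargetDir(self, value):
            """Value must be an absolute path (or None); need not exist on disk."""
            if value is not None and not os.path.isabs(value):
                raise ValueError("Target directory must be an absolute path.")
            self._targetDir = value

        def _getTargetDir(self):
            return self._targetDir

        targetDir = property(_getTargetDir, _setTargetDir, None,
                             "Directory to stage files into, by peer name.")
    ```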

    CedarBackup2-2.22.0/doc/interface/class-tree.html
     

    Class Hierarchy

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config-pysrc.html

    Source Code for Module CedarBackup2.config

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python (>= 2.5) 
      29  # Project  : Cedar Backup, release 2 
      30  # Revision : $Id: config.py 1041 2013-05-10 02:05:13Z pronovic $ 
      31  # Purpose  : Provides configuration-related objects. 
      32  # 
      33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      34   
      35  ######################################################################## 
      36  # Module documentation 
      37  ######################################################################## 
      38   
      39  """ 
      40  Provides configuration-related objects. 
      41   
      42  Summary 
      43  ======= 
      44   
      45     Cedar Backup stores all of its configuration in an XML document typically 
      46     called C{cback.conf}.  The standard location for this document is in 
      47     C{/etc}, but users can specify a different location if they want to.   
      48   
      49     The C{Config} class is a Python object representation of a Cedar Backup XML 
      50     configuration file.  The representation is two-way: XML data can be used to 
       51     create a C{Config} object, and then changes to the object can be propagated 
      52     back to disk.  A C{Config} object can even be used to create a configuration 
      53     file from scratch programmatically. 
      54   
      55     The C{Config} class is intended to be the only Python-language interface to 
      56     Cedar Backup configuration on disk.  Cedar Backup will use the class as its 
      57     internal representation of configuration, and applications external to Cedar 
      58     Backup itself (such as a hypothetical third-party configuration tool written 
      59     in Python or a third party extension module) should also use the class when 
      60     they need to read and write configuration files. 
      61   
      62  Backwards Compatibility 
      63  ======================= 
      64   
      65     The configuration file format has changed between Cedar Backup 1.x and Cedar 
      66     Backup 2.x.  Any Cedar Backup 1.x configuration file is also a valid Cedar 
      67     Backup 2.x configuration file.  However, it doesn't work to go the other 
       68     direction, as the 2.x configuration files contain additional configuration 
       69     that is not accepted by older versions of the software.   
      70   
      71  XML Configuration Structure 
      72  =========================== 
      73   
      74     A C{Config} object can either be created "empty", or can be created based on 
      75     XML input (either in the form of a string or read in from a file on disk). 
      76     Generally speaking, the XML input I{must} result in a C{Config} object which 
      77     passes the validations laid out below in the I{Validation} section.   
      78   
       79     An XML configuration file is composed of eight sections: 
      80   
      81        - I{reference}: specifies reference information about the file (author, revision, etc) 
      82        - I{extensions}: specifies mappings to Cedar Backup extensions (external code) 
      83        - I{options}: specifies global configuration options 
      84        - I{peers}: specifies the set of peers in a master's backup pool 
      85        - I{collect}: specifies configuration related to the collect action 
      86        - I{stage}: specifies configuration related to the stage action 
      87        - I{store}: specifies configuration related to the store action 
      88        - I{purge}: specifies configuration related to the purge action 
      89   
       90     Each section is represented by a class in this module, and then the overall 
      91     C{Config} class is a composition of the various other classes.   
      92   
      93     Any configuration section that is missing in the XML document (or has not 
      94     been filled into an "empty" document) will just be set to C{None} in the 
      95     object representation.  The same goes for individual fields within each 
      96     configuration section.  Keep in mind that the document might not be 
      97     completely valid if some sections or fields aren't filled in - but that 
      98     won't matter until validation takes place (see the I{Validation} section 
      99     below). 
     100   
     101  Unicode vs. String Data 
     102  ======================= 
     103   
     104     By default, all string data that comes out of XML documents in Python is 
     105     unicode data (i.e. C{u"whatever"}).  This is fine for many things, but when 
     106     it comes to filesystem paths, it can cause us some problems.  We really want 
     107     strings to be encoded in the filesystem encoding rather than being unicode. 
     108     So, most elements in configuration which represent filesystem paths are 
      109     converted to plain strings using L{util.encodePath}.  The main exception is 
     110     the various C{absoluteExcludePath} and C{relativeExcludePath} lists.  These 
     111     are I{not} converted, because they are generally only used for filtering, 
     112     not for filesystem operations. 
     113   
     114  Validation  
     115  ========== 
     116   
     117     There are two main levels of validation in the C{Config} class and its 
     118     children.  The first is field-level validation.  Field-level validation 
     119     comes into play when a given field in an object is assigned to or updated. 
     120     We use Python's C{property} functionality to enforce specific validations on 
     121     field values, and in some places we even use customized list classes to 
     122     enforce validations on list members.  You should expect to catch a 
     123     C{ValueError} exception when making assignments to configuration class 
     124     fields. 
     125   
     126     The second level of validation is post-completion validation.  Certain 
     127     validations don't make sense until a document is fully "complete".  We don't 
     128     want these validations to apply all of the time, because it would make 
     129     building up a document from scratch a real pain.  For instance, we might 
     130     have to do things in the right order to keep from throwing exceptions, etc. 
     131   
     132     All of these post-completion validations are encapsulated in the 
     133     L{Config.validate} method.  This method can be called at any time by a 
     134     client, and will always be called immediately after creating a C{Config} 
     135     object from XML data and before exporting a C{Config} object to XML.  This 
     136     way, we get decent ease-of-use but we also don't accept or emit invalid 
     137     configuration files. 
     138   
     139     The L{Config.validate} implementation actually takes two passes to 
     140     completely validate a configuration document.  The first pass at validation 
     141     is to ensure that the proper sections are filled into the document.  There 
     142     are default requirements, but the caller has the opportunity to override 
     143     these defaults. 
     144   
     145     The second pass at validation ensures that any filled-in section contains 
     146     valid data.  Any section which is not set to C{None} is validated according 
     147     to the rules for that section (see below). 
     148   
     149     I{Reference Validations} 
     150   
     151     No validations. 
     152   
     153     I{Extensions Validations} 
     154   
     155     The list of actions may be either C{None} or an empty list C{[]} if desired. 
     156     Each extended action must include a name, a module and a function.  Then, an 
     157     extended action must include either an index or dependency information. 
     158     Which one is required depends on which order mode is configured. 
     159   
     160     I{Options Validations} 
     161   
     162     All fields must be filled in except the rsh command.  The rcp and rsh 
     163     commands are used as default values for all remote peers.  Remote peers can 
     164     also rely on the backup user as the default remote user name if they choose. 
     165   
     166     I{Peers Validations} 
     167   
     168     Local peers must be completely filled in, including both name and collect 
     169     directory.  Remote peers must also fill in the name and collect directory, 
     170     but can leave the remote user and rcp command unset.  In this case, the 
     171     remote user is assumed to match the backup user from the options section and 
     172     rcp command is taken directly from the options section. 
     173   
     174     I{Collect Validations} 
     175   
     176     The target directory must be filled in.  The collect mode, archive mode and 
     177     ignore file are all optional.  The list of absolute paths to exclude and 
     178     patterns to exclude may be either C{None} or an empty list C{[]} if desired. 
     179   
     180     Each collect directory entry must contain an absolute path to collect, and 
     181     then must either be able to take collect mode, archive mode and ignore file 
     182     configuration from the parent C{CollectConfig} object, or must set each 
     183     value on its own.  The list of absolute paths to exclude, relative paths to 
     184     exclude and patterns to exclude may be either C{None} or an empty list C{[]} 
     185     if desired.  Any list of absolute paths to exclude or patterns to exclude 
     186     will be combined with the same list in the C{CollectConfig} object to make 
     187     the complete list for a given directory. 
     188   
     189     I{Stage Validations} 
     190   
     191     The target directory must be filled in.  There must be at least one peer 
     192     (remote or local) between the two lists of peers.  A list with no entries 
     193     can be either C{None} or an empty list C{[]} if desired. 
     194   
     195     If a set of peers is provided, this configuration completely overrides 
     196     configuration in the peers configuration section, and the same validations 
     197     apply. 
     198   
     199     I{Store Validations} 
     200   
     201     The device type and drive speed are optional, and all other values are 
     202     required (missing booleans will be set to defaults, which is OK). 
     203   
     204     The image writer functionality in the C{writer} module is supposed to be 
     205     able to handle a device speed of C{None}.  Any caller which needs a "real" 
     206     (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, 
     207     which is guaranteed to be sensible. 
     208   
     209     I{Purge Validations} 
     210   
     211     The list of purge directories may be either C{None} or an empty list C{[]} 
     212     if desired.  All purge directories must contain a path and a retain days 
     213     value. 
     214   
     215  @sort: ActionDependencies, ActionHook, PreActionHook, PostActionHook, 
     216         ExtendedAction, CommandOverride, CollectFile, CollectDir, PurgeDir, LocalPeer,  
     217         RemotePeer, ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig, 
     218         CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config, 
     219         DEFAULT_DEVICE_TYPE, DEFAULT_MEDIA_TYPE,  
     220         VALID_DEVICE_TYPES, VALID_MEDIA_TYPES,  
     221         VALID_COLLECT_MODES, VALID_ARCHIVE_MODES, 
     222         VALID_ORDER_MODES 
     223   
     224  @var DEFAULT_DEVICE_TYPE: The default device type. 
     225  @var DEFAULT_MEDIA_TYPE: The default media type. 
     226  @var VALID_DEVICE_TYPES: List of valid device types. 
     227  @var VALID_MEDIA_TYPES: List of valid media types. 
     228  @var VALID_COLLECT_MODES: List of valid collect modes. 
     229  @var VALID_COMPRESS_MODES: List of valid compress modes. 
     230  @var VALID_ARCHIVE_MODES: List of valid archive modes. 
     231  @var VALID_ORDER_MODES: List of valid extension order modes. 
     232   
     233  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     234  """ 
     235   
     236  ######################################################################## 
     237  # Imported modules 
     238  ######################################################################## 
     239   
     240  # System modules 
     241  import os 
     242  import re 
     243  import logging 
     244   
     245  # Cedar Backup modules 
     246  from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed 
     247  from CedarBackup2.util import UnorderedList, AbsolutePathList, ObjectTypeList, parseCommaSeparatedString 
     248  from CedarBackup2.util import RegexMatchList, RegexList, encodePath, checkUnique 
     249  from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES 
     250  from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild 
     251  from CedarBackup2.xmlutil import readStringList, readString, readInteger, readBoolean 
     252  from CedarBackup2.xmlutil import addContainerNode, addStringNode, addIntegerNode, addBooleanNode 
     253  from CedarBackup2.xmlutil import createInputDom, createOutputDom, serializeDom 
     254   
     255   
     256  ######################################################################## 
     257  # Module-wide constants and variables 
     258  ######################################################################## 
     259   
     260  logger = logging.getLogger("CedarBackup2.log.config") 
     261   
     262  DEFAULT_DEVICE_TYPE   = "cdwriter" 
     263  DEFAULT_MEDIA_TYPE    = "cdrw-74" 
     264   
     265  VALID_DEVICE_TYPES    = [ "cdwriter", "dvdwriter", ] 
     266  VALID_CD_MEDIA_TYPES  = [ "cdr-74", "cdrw-74", "cdr-80", "cdrw-80", ] 
     267  VALID_DVD_MEDIA_TYPES = [ "dvd+r", "dvd+rw", ] 
     268  VALID_MEDIA_TYPES     = VALID_CD_MEDIA_TYPES + VALID_DVD_MEDIA_TYPES 
     269  VALID_COLLECT_MODES   = [ "daily", "weekly", "incr", ] 
     270  VALID_ARCHIVE_MODES   = [ "tar", "targz", "tarbz2", ] 
     271  VALID_COMPRESS_MODES  = [ "none", "gzip", "bzip2", ] 
     272  VALID_ORDER_MODES     = [ "index", "dependency", ] 
     273  VALID_BLANK_MODES     = [ "daily", "weekly", ] 
     274  VALID_BYTE_UNITS      = [ UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, ]  
     275  VALID_FAILURE_MODES   = [ "none", "all", "daily", "weekly", ] 
     276   
     277  REWRITABLE_MEDIA_TYPES = [ "cdrw-74", "cdrw-80", "dvd+rw", ] 
     278   
     279  ACTION_NAME_REGEX     = r"^[a-z0-9]*$" 
    
     280   
     281   
     282  ######################################################################## 
     283  # ByteQuantity class definition 
     284  ######################################################################## 
     285   
     286  class ByteQuantity(object): 
     287   
     288     """ 
     289     Class representing a byte quantity. 
     290   
     291     A byte quantity has both a quantity and a byte-related unit.  Units are 
     292     maintained using the constants from util.py. 
     293   
     294     The quantity is maintained internally as a string so that issues of 
     295     precision can be avoided.  It really isn't possible to store a floating 
     296     point number here while being able to losslessly translate back and forth 
     297     between XML and object representations.  (Perhaps the Python 2.4 Decimal 
     298     class would have been an option, but I originally wanted to stay compatible 
     299     with Python 2.3.) 
     300   
     301     Even though the quantity is maintained as a string, the string must 
     302     represent a valid positive floating point number.  Technically, any 
     303     floating point string format supported by Python is allowable.  However, it 
     304     does not make sense to have a negative quantity of bytes in this context. 
     305   
     306     @sort: __init__, __repr__, __str__, __cmp__, quantity, units 
     307     """ 
     308   
     309     def __init__(self, quantity=None, units=None): 
     310        """ 
     311        Constructor for the C{ByteQuantity} class. 
     312   
     313        @param quantity: Quantity of bytes, as string ("1.25") 
     314        @param units: Unit of bytes, one of VALID_BYTE_UNITS 
     315   
     316        @raise ValueError: If one of the values is invalid. 
     317        """ 
     318        self._quantity = None 
     319        self._units = None 
     320        self.quantity = quantity 
     321        self.units = units 
     322   
     323     def __repr__(self): 
     324        """ 
     325        Official string representation for class instance. 
     326        """ 
     327        return "ByteQuantity(%s, %s)" % (self.quantity, self.units) 
     328   
     329     def __str__(self): 
     330        """ 
     331        Informal string representation for class instance. 
     332        """ 
     333        return self.__repr__() 
     334   
     335     def __cmp__(self, other): 
     336        """ 
     337        Definition of equals operator for this class. 
     338        Lists within this class are "unordered" for equality comparisons. 
     339        @param other: Other object to compare to. 
     340        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
     341        """ 
     342        if other is None: 
     343           return 1 
     344        if self.quantity != other.quantity: 
     345           if self.quantity < other.quantity: 
     346              return -1 
     347           else: 
     348              return 1 
     349        if self.units != other.units: 
     350           if self.units < other.units: 
     351              return -1 
     352           else: 
     353              return 1 
     354        return 0 
     355   
     356     def _setQuantity(self, value): 
     357        """ 
     358        Property target used to set the quantity. 
     359        The value must be a non-empty string if it is not C{None}. 
     360        @raise ValueError: If the value is an empty string. 
     361        @raise ValueError: If the value is not a valid floating point number 
     362        @raise ValueError: If the value is less than zero 
     363        """ 
     364        if value is not None: 
     365           if len(value) < 1: 
     366              raise ValueError("Quantity must be a non-empty string.") 
     367           floatValue = float(value) 
     368           if floatValue < 0.0: 
     369              raise ValueError("Quantity cannot be negative.") 
     370        self._quantity = value  # keep around string 
     371   
     372     def _getQuantity(self): 
     373        """ 
     374        Property target used to get the quantity. 
     375        """ 
     376        return self._quantity 
    377
    378 - def _setUnits(self, value):
    379 """ 380 Property target used to set the units value. 381 If not C{None}, the units value must be one of the values in L{VALID_BYTE_UNITS}. 382 @raise ValueError: If the value is not valid. 383 """ 384 if value is not None: 385 if value not in VALID_BYTE_UNITS: 386 raise ValueError("Units value must be one of %s." % VALID_BYTE_UNITS) 387 self._units = value
    388
    389 - def _getUnits(self):
    390 """ 391 Property target used to get the units value. 392 """ 393 return self._units
    394
    395 - def _getBytes(self):
    396 """ 397 Property target used to return the byte quantity as a floating point number. 398 If there is no quantity set, then a value of 0.0 is returned. 399 """ 400 if self.quantity is not None and self.units is not None: 401 return convertSize(self.quantity, self.units, UNIT_BYTES) 402 return 0.0
    403 404 quantity = property(_getQuantity, _setQuantity, None, doc="Byte quantity, as a string") 405 units = property(_getUnits, _setUnits, None, doc="Units for byte quantity, for instance UNIT_BYTES") 406 bytes = property(_getBytes, None, None, doc="Byte quantity, as a floating point number.")
    407
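The design note in the docstring (keep the quantity as a string, convert only on demand through the C{bytes} property) can be sketched as follows. The C{convert_size} helper and the 1024-based unit factors here are illustrative stand-ins for util.py's C{convertSize} and unit constants, not the actual implementation:

```python
# Illustrative stand-ins for the unit constants defined in util.py.
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES = 0, 1, 2, 3
_FACTORS = {UNIT_BYTES: 1.0, UNIT_KBYTES: 1024.0,
            UNIT_MBYTES: 1024.0 ** 2, UNIT_GBYTES: 1024.0 ** 3}

def convert_size(quantity, from_unit, to_unit):
    """Convert a string quantity between byte units (assumed 1024-based)."""
    return float(quantity) * _FACTORS[from_unit] / _FACTORS[to_unit]

# The quantity stays a string ("1.25") for lossless XML round-trips;
# only the derived value produced at access time is a float.
derived = convert_size("1.25", UNIT_KBYTES, UNIT_BYTES)
```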
    408 409 ######################################################################## 410 # ActionDependencies class definition 411 ######################################################################## 412 413 -class ActionDependencies(object):
    414 415 """ 416 Class representing dependencies associated with an extended action. 417 418 Execution ordering for extended actions is done in one of two ways: either by using 419 index values (lower index gets run first) or by having the extended action specify 420 dependencies in terms of other named actions. This class encapsulates the dependency 421 information for an extended action. 422 423 The following restrictions exist on data in this class: 424 425 - Any action name must be a non-empty string matching C{ACTION_NAME_REGEX} 426 427 @sort: __init__, __repr__, __str__, __cmp__, beforeList, afterList 428 """ 429
    430 - def __init__(self, beforeList=None, afterList=None):
    431 """ 432 Constructor for the C{ActionDependencies} class. 433 434 @param beforeList: List of named actions that this action must be run before 435 @param afterList: List of named actions that this action must be run after 436 437 @raise ValueError: If one of the values is invalid. 438 """ 439 self._beforeList = None 440 self._afterList = None 441 self.beforeList = beforeList 442 self.afterList = afterList
    443
    444 - def __repr__(self):
    445 """ 446 Official string representation for class instance. 447 """ 448 return "ActionDependencies(%s, %s)" % (self.beforeList, self.afterList)
    449
    450 - def __str__(self):
    451 """ 452 Informal string representation for class instance. 453 """ 454 return self.__repr__()
    455
    456 - def __cmp__(self, other):
    457 """ 458 Definition of equals operator for this class. 459 @param other: Other object to compare to. 460 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 461 """ 462 if other is None: 463 return 1 464 if self.beforeList != other.beforeList: 465 if self.beforeList < other.beforeList: 466 return -1 467 else: 468 return 1 469 if self.afterList != other.afterList: 470 if self.afterList < other.afterList: 471 return -1 472 else: 473 return 1 474 return 0
    475
    476 - def _setBeforeList(self, value):
    477 """ 478 Property target used to set the "run before" list. 479 Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. 480 @raise ValueError: If the value does not match the regular expression. 481 """ 482 if value is None: 483 self._beforeList = None 484 else: 485 try: 486 saved = self._beforeList 487 self._beforeList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") 488 self._beforeList.extend(value) 489 except Exception, e: 490 self._beforeList = saved 491 raise e
    492
    493 - def _getBeforeList(self):
    494 """ 495 Property target used to get the "run before" list. 496 """ 497 return self._beforeList
    498
    499 - def _setAfterList(self, value):
    500 """ 501 Property target used to set the "run after" list. 502 Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. 503 @raise ValueError: If the value does not match the regular expression. 504 """ 505 if value is None: 506 self._afterList = None 507 else: 508 try: 509 saved = self._afterList 510 self._afterList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") 511 self._afterList.extend(value) 512 except Exception, e: 513 self._afterList = saved 514 raise e
    515
    516 - def _getAfterList(self):
    517 """ 518 Property target used to get the "run after" list. 519 """ 520 return self._afterList
    521 522 beforeList = property(_getBeforeList, _setBeforeList, None, "List of named actions that this action must be run before.") 523 afterList = property(_getAfterList, _setAfterList, None, "List of named actions that this action must be run after.")
    524
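In dependency mode, the before/after lists amount to ordering edges between named actions. A hypothetical resolver illustrating those semantics (this is a sketch of the idea, not Cedar Backup's actual scheduler):

```python
from collections import defaultdict, deque

def resolve_order(actions):
    """Topologically sort (name, beforeList, afterList) tuples.

    An entry in beforeList means this action runs before that named action;
    an entry in afterList means it runs after it.  Hypothetical resolver
    illustrating the semantics only; assumes all referenced names appear.
    """
    edges = defaultdict(set)          # edge a -> b means a runs before b
    names = [name for name, _, _ in actions]
    for name, before, after in actions:
        for other in before or []:
            edges[name].add(other)
        for other in after or []:
            edges[other].add(name)
    indegree = {n: 0 for n in names}
    for a in edges:
        for b in edges[a]:
            indegree[b] += 1
    queue = deque(sorted(n for n in names if indegree[n] == 0))
    ordered = []
    while queue:
        n = queue.popleft()
        ordered.append(n)
        for b in sorted(edges[n]):
            indegree[b] -= 1
            if indegree[b] == 0:
                queue.append(b)
    if len(ordered) != len(names):
        raise ValueError("Circular dependency among actions.")
    return ordered
```

A cycle (an action both before and after another) leaves the sort incomplete, which surfaces as the C{ValueError} at the end.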
    525 526 ######################################################################## 527 # ActionHook class definition 528 ######################################################################## 529 530 -class ActionHook(object):
    531 532 """ 533 Class representing a hook associated with an action. 534 535 A hook associated with an action is a shell command to be executed either 536 before or after a named action is executed. 537 538 The following restrictions exist on data in this class: 539 540 - The action name must be a non-empty string matching C{ACTION_NAME_REGEX} 541 - The shell command must be a non-empty string. 542 543 The internal C{before} and C{after} instance variables are always set to 544 False in this parent class. 545 546 @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after 547 """ 548
    549 - def __init__(self, action=None, command=None):
    550 """ 551 Constructor for the C{ActionHook} class. 552 553 @param action: Action this hook is associated with 554 @param command: Shell command to execute 555 556 @raise ValueError: If one of the values is invalid. 557 """ 558 self._action = None 559 self._command = None 560 self._before = False 561 self._after = False 562 self.action = action 563 self.command = command
    564
    565 - def __repr__(self):
    566 """ 567 Official string representation for class instance. 568 """ 569 return "ActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)
    570
    571 - def __str__(self):
    572 """ 573 Informal string representation for class instance. 574 """ 575 return self.__repr__()
    576
    577 - def __cmp__(self, other):
    578 """ 579 Definition of equals operator for this class. 580 @param other: Other object to compare to. 581 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 582 """ 583 if other is None: 584 return 1 585 if self.action != other.action: 586 if self.action < other.action: 587 return -1 588 else: 589 return 1 590 if self.command != other.command: 591 if self.command < other.command: 592 return -1 593 else: 594 return 1 595 if self.before != other.before: 596 if self.before < other.before: 597 return -1 598 else: 599 return 1 600 if self.after != other.after: 601 if self.after < other.after: 602 return -1 603 else: 604 return 1 605 return 0
    606
    607 - def _setAction(self, value):
    608 """ 609 Property target used to set the action name. 610 The value must be a non-empty string if it is not C{None}. 611 It must also consist only of lower-case letters and digits. 612 @raise ValueError: If the value is an empty string. 613 """ 614 pattern = re.compile(ACTION_NAME_REGEX) 615 if value is not None: 616 if len(value) < 1: 617 raise ValueError("The action name must be a non-empty string.") 618 if not pattern.search(value): 619 raise ValueError("The action name must consist of only lower-case letters and digits.") 620 self._action = value
    621
    622 - def _getAction(self):
    623 """ 624 Property target used to get the action name. 625 """ 626 return self._action
    627
    628 - def _setCommand(self, value):
    629 """ 630 Property target used to set the command. 631 The value must be a non-empty string if it is not C{None}. 632 @raise ValueError: If the value is an empty string. 633 """ 634 if value is not None: 635 if len(value) < 1: 636 raise ValueError("The command must be a non-empty string.") 637 self._command = value
    638
    639 - def _getCommand(self):
    640 """ 641 Property target used to get the command. 642 """ 643 return self._command
    644
    645 - def _getBefore(self):
    646 """ 647 Property target used to get the before flag. 648 """ 649 return self._before
    650
    651 - def _getAfter(self):
    652 """ 653 Property target used to get the after flag. 654 """ 655 return self._after
    656 657 action = property(_getAction, _setAction, None, "Action this hook is associated with.") 658 command = property(_getCommand, _setCommand, None, "Shell command to execute.") 659 before = property(_getBefore, None, None, "Indicates whether command should be executed before action.") 660 after = property(_getAfter, None, None, "Indicates whether command should be executed after action.")
    661
    662 -class PreActionHook(ActionHook):
    663 664 """ 665 Class representing a pre-action hook associated with an action. 666 667 A hook associated with an action is a shell command to be executed either 668 before or after a named action is executed. In this case, a pre-action hook 669 is executed before the named action. 670 671 The following restrictions exist on data in this class: 672 673 - The action name must be a non-empty string consisting of lower-case letters and digits. 674 - The shell command must be a non-empty string. 675 676 The internal C{before} instance variable is always set to True in this 677 class. 678 679 @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after 680 """ 681
    682 - def __init__(self, action=None, command=None):
    683 """ 684 Constructor for the C{PreActionHook} class. 685 686 @param action: Action this hook is associated with 687 @param command: Shell command to execute 688 689 @raise ValueError: If one of the values is invalid. 690 """ 691 ActionHook.__init__(self, action, command) 692 self._before = True
    693
    694 - def __repr__(self):
    695 """ 696 Official string representation for class instance. 697 """ 698 return "PreActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)
    699
    700 -class PostActionHook(ActionHook):
701 702 """ 703 Class representing a post-action hook associated with an action. 704 705 A hook associated with an action is a shell command to be executed either 706 before or after a named action is executed. In this case, a post-action hook 707 is executed after the named action. 708 709 The following restrictions exist on data in this class: 710 711 - The action name must be a non-empty string consisting of lower-case letters and digits. 712 - The shell command must be a non-empty string. 713 714 The internal C{after} instance variable is always set to True in this 715 class. 716 717 @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after 718 """ 719
    720 - def __init__(self, action=None, command=None):
    721 """ 722 Constructor for the C{PostActionHook} class. 723 724 @param action: Action this hook is associated with 725 @param command: Shell command to execute 726 727 @raise ValueError: If one of the values is invalid. 728 """ 729 ActionHook.__init__(self, action, command) 730 self._after = True
    731
    732 - def __repr__(self):
    733 """ 734 Official string representation for class instance. 735 """ 736 return "PostActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)
    737
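The C{before}/C{after} flags are what a caller keys on when dispatching hooks around an action. A minimal dispatcher sketch; the C{Hook} class and C{run_action} function here are simplified stand-ins for C{ActionHook} and the caller, and commands are recorded rather than executed:

```python
class Hook(object):
    """Stand-in for ActionHook: an (action, command) pair with flags."""
    def __init__(self, action, command, before=False, after=False):
        self.action, self.command = action, command
        self.before, self.after = before, after

def run_action(name, hooks, execute):
    """Run pre-hooks, then the action itself, then post-hooks."""
    for hook in hooks:
        if hook.action == name and hook.before:
            execute(hook.command)
    execute("<%s action>" % name)
    for hook in hooks:
        if hook.action == name and hook.after:
            execute(hook.command)

calls = []
hooks = [Hook("collect", "mount /backup", before=True),
         Hook("collect", "umount /backup", after=True)]
run_action("collect", hooks, calls.append)
```

With C{PreActionHook} and C{PostActionHook} setting exactly one flag each in their constructors, a single hook list is enough to cover both dispatch points.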
    738 739 ######################################################################## 740 # BlankBehavior class definition 741 ######################################################################## 742 743 -class BlankBehavior(object):
    744 745 """ 746 Class representing optimized store-action media blanking behavior. 747 748 The following restrictions exist on data in this class: 749 750 - The blanking mode must be a one of the values in L{VALID_BLANK_MODES} 751 - The blanking factor must be a positive floating point number 752 753 @sort: __init__, __repr__, __str__, __cmp__, blankMode, blankFactor 754 """ 755
    756 - def __init__(self, blankMode=None, blankFactor=None):
    757 """ 758 Constructor for the C{BlankBehavior} class. 759 760 @param blankMode: Blanking mode 761 @param blankFactor: Blanking factor 762 763 @raise ValueError: If one of the values is invalid. 764 """ 765 self._blankMode = None 766 self._blankFactor = None 767 self.blankMode = blankMode 768 self.blankFactor = blankFactor
    769
    770 - def __repr__(self):
    771 """ 772 Official string representation for class instance. 773 """ 774 return "BlankBehavior(%s, %s)" % (self.blankMode, self.blankFactor)
    775
    776 - def __str__(self):
    777 """ 778 Informal string representation for class instance. 779 """ 780 return self.__repr__()
    781
    782 - def __cmp__(self, other):
    783 """ 784 Definition of equals operator for this class. 785 @param other: Other object to compare to. 786 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 787 """ 788 if other is None: 789 return 1 790 if self.blankMode != other.blankMode: 791 if self.blankMode < other.blankMode: 792 return -1 793 else: 794 return 1 795 if self.blankFactor != other.blankFactor: 796 if self.blankFactor < other.blankFactor: 797 return -1 798 else: 799 return 1 800 return 0
    801
    802 - def _setBlankMode(self, value):
    803 """ 804 Property target used to set the blanking mode. 805 The value must be one of L{VALID_BLANK_MODES}. 806 @raise ValueError: If the value is not valid. 807 """ 808 if value is not None: 809 if value not in VALID_BLANK_MODES: 810 raise ValueError("Blanking mode must be one of %s." % VALID_BLANK_MODES) 811 self._blankMode = value
    812
    813 - def _getBlankMode(self):
    814 """ 815 Property target used to get the blanking mode. 816 """ 817 return self._blankMode
    818
    819 - def _setBlankFactor(self, value):
    820 """ 821 Property target used to set the blanking factor. 822 The value must be a non-empty string if it is not C{None}. 823 @raise ValueError: If the value is an empty string. 824 @raise ValueError: If the value is not a valid floating point number 825 @raise ValueError: If the value is less than zero 826 """ 827 if value is not None: 828 if len(value) < 1: 829 raise ValueError("Blanking factor must be a non-empty string.") 830 floatValue = float(value) 831 if floatValue < 0.0: 832 raise ValueError("Blanking factor cannot be negative.") 833 self._blankFactor = value # keep around string
    834
    835 - def _getBlankFactor(self):
    836 """ 837 Property target used to get the blanking factor. 838 """ 839 return self._blankFactor
    840 841 blankMode = property(_getBlankMode, _setBlankMode, None, "Blanking mode") 842 blankFactor = property(_getBlankFactor, _setBlankFactor, None, "Blanking factor")
    843
    844 845 ######################################################################## 846 # ExtendedAction class definition 847 ######################################################################## 848 849 -class ExtendedAction(object):
850 851 """ 852 Class representing an extended action. 853 854 Essentially, an extended action needs to allow the following to happen:: 855 856 exec("from %s import %s" % (module, function)) 857 exec("%s(action, configPath)" % function) 858 859 The following restrictions exist on data in this class: 860 861 - The action name must be a non-empty string consisting of lower-case letters and digits. 862 - The module must be a non-empty string and a valid Python identifier. 863 - The function must be a non-empty string and a valid Python identifier. 864 - If set, the index must be a positive integer. 865 - If set, the dependencies attribute must be an C{ActionDependencies} object. 866 867 @sort: __init__, __repr__, __str__, __cmp__, name, module, function, index, dependencies 868 """ 869
    870 - def __init__(self, name=None, module=None, function=None, index=None, dependencies=None):
    871 """ 872 Constructor for the C{ExtendedAction} class. 873 874 @param name: Name of the extended action 875 @param module: Name of the module containing the extended action function 876 @param function: Name of the extended action function 877 @param index: Index of action, used for execution ordering 878 @param dependencies: Dependencies for action, used for execution ordering 879 880 @raise ValueError: If one of the values is invalid. 881 """ 882 self._name = None 883 self._module = None 884 self._function = None 885 self._index = None 886 self._dependencies = None 887 self.name = name 888 self.module = module 889 self.function = function 890 self.index = index 891 self.dependencies = dependencies
    892
    893 - def __repr__(self):
    894 """ 895 Official string representation for class instance. 896 """ 897 return "ExtendedAction(%s, %s, %s, %s, %s)" % (self.name, self.module, self.function, self.index, self.dependencies)
    898
    899 - def __str__(self):
    900 """ 901 Informal string representation for class instance. 902 """ 903 return self.__repr__()
    904
    905 - def __cmp__(self, other):
    906 """ 907 Definition of equals operator for this class. 908 @param other: Other object to compare to. 909 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 910 """ 911 if other is None: 912 return 1 913 if self.name != other.name: 914 if self.name < other.name: 915 return -1 916 else: 917 return 1 918 if self.module != other.module: 919 if self.module < other.module: 920 return -1 921 else: 922 return 1 923 if self.function != other.function: 924 if self.function < other.function: 925 return -1 926 else: 927 return 1 928 if self.index != other.index: 929 if self.index < other.index: 930 return -1 931 else: 932 return 1 933 if self.dependencies != other.dependencies: 934 if self.dependencies < other.dependencies: 935 return -1 936 else: 937 return 1 938 return 0
    939
    940 - def _setName(self, value):
    941 """ 942 Property target used to set the action name. 943 The value must be a non-empty string if it is not C{None}. 944 It must also consist only of lower-case letters and digits. 945 @raise ValueError: If the value is an empty string. 946 """ 947 pattern = re.compile(ACTION_NAME_REGEX) 948 if value is not None: 949 if len(value) < 1: 950 raise ValueError("The action name must be a non-empty string.") 951 if not pattern.search(value): 952 raise ValueError("The action name must consist of only lower-case letters and digits.") 953 self._name = value
    954
    955 - def _getName(self):
    956 """ 957 Property target used to get the action name. 958 """ 959 return self._name
    960
    961 - def _setModule(self, value):
    962 """ 963 Property target used to set the module name. 964 The value must be a non-empty string if it is not C{None}. 965 It must also be a valid Python identifier. 966 @raise ValueError: If the value is an empty string. 967 """ 968 pattern = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)(\.[A-Za-z_][A-Za-z0-9_]*)*$") 969 if value is not None: 970 if len(value) < 1: 971 raise ValueError("The module name must be a non-empty string.") 972 if not pattern.search(value): 973 raise ValueError("The module name must be a valid Python identifier.") 974 self._module = value
    975
    976 - def _getModule(self):
    977 """ 978 Property target used to get the module name. 979 """ 980 return self._module
    981
    982 - def _setFunction(self, value):
    983 """ 984 Property target used to set the function name. 985 The value must be a non-empty string if it is not C{None}. 986 It must also be a valid Python identifier. 987 @raise ValueError: If the value is an empty string. 988 """ 989 pattern = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$") 990 if value is not None: 991 if len(value) < 1: 992 raise ValueError("The function name must be a non-empty string.") 993 if not pattern.search(value): 994 raise ValueError("The function name must be a valid Python identifier.") 995 self._function = value
    996
    997 - def _getFunction(self):
    998 """ 999 Property target used to get the function name. 1000 """ 1001 return self._function
    1002
    1003 - def _setIndex(self, value):
1004 """ 1005 Property target used to set the action index. 1006 The value must be an integer >= 0. 1007 @raise ValueError: If the value is not valid. 1008 """ 1009 if value is None: 1010 self._index = None 1011 else: 1012 try: 1013 value = int(value) 1014 except (TypeError, ValueError): 1015 raise ValueError("Action index value must be an integer >= 0.") 1016 if value < 0: 1017 raise ValueError("Action index value must be an integer >= 0.") 1018 self._index = value
    1019
    1020 - def _getIndex(self):
    1021 """ 1022 Property target used to get the action index. 1023 """ 1024 return self._index
    1025
    1026 - def _setDependencies(self, value):
1027 """ 1028 Property target used to set the action dependencies information. 1029 If not C{None}, the value must be a C{ActionDependencies} object. 1030 @raise ValueError: If the value is not a C{ActionDependencies} object. 1031 """ 1032 if value is None: 1033 self._dependencies = None 1034 else: 1035 if not isinstance(value, ActionDependencies): 1036 raise ValueError("Value must be a C{ActionDependencies} object.") 1037 self._dependencies = value
    1038
    1039 - def _getDependencies(self):
    1040 """ 1041 Property target used to get action dependencies information. 1042 """ 1043 return self._dependencies
    1044 1045 name = property(_getName, _setName, None, "Name of the extended action.") 1046 module = property(_getModule, _setModule, None, "Name of the module containing the extended action function.") 1047 function = property(_getFunction, _setFunction, None, "Name of the extended action function.") 1048 index = property(_getIndex, _setIndex, None, "Index of action, used for execution ordering.") 1049 dependencies = property(_getDependencies, _setDependencies, None, "Dependencies for action, used for execution ordering.")
    1050
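The docstring's C{exec()} pseudocode amounts to a dynamic import followed by a call. One way to realize that without C{exec} is via C{importlib}; this is a sketch of the idea, not the code cli.py actually runs, and the C{fnmatch} target below is just a stdlib stand-in for an extended action function taking two arguments:

```python
import importlib

def invoke_extended_action(module, function, action, config_path):
    """Import `function` from `module`, then call function(action, configPath)."""
    mod = importlib.import_module(module)   # "from <module> import ..."
    func = getattr(mod, function)           # ... "<function>"
    return func(action, config_path)        # "<function>(action, configPath)"

# Stand-in target: fnmatch.fnmatch(name, pattern) happens to take two args.
matched = invoke_extended_action("fnmatch", "fnmatch", "store", "st*")
```

The module-name regex in C{_setModule} accepts dotted names, so C{importlib.import_module} style lookups of nested modules are covered by the same validation.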
    1051 1052 ######################################################################## 1053 # CommandOverride class definition 1054 ######################################################################## 1055 1056 -class CommandOverride(object):
    1057 1058 """ 1059 Class representing a piece of Cedar Backup command override configuration. 1060 1061 The following restrictions exist on data in this class: 1062 1063 - The absolute path must be absolute 1064 1065 @note: Lists within this class are "unordered" for equality comparisons. 1066 1067 @sort: __init__, __repr__, __str__, __cmp__, command, absolutePath 1068 """ 1069
    1070 - def __init__(self, command=None, absolutePath=None):
1071 """ 1072 Constructor for the C{CommandOverride} class. 1073 1074 @param command: Name of command to be overridden. 1075 @param absolutePath: Absolute path of the overridden command. 1076 1077 @raise ValueError: If one of the values is invalid. 1078 """ 1079 self._command = None 1080 self._absolutePath = None 1081 self.command = command 1082 self.absolutePath = absolutePath
    1083
    1084 - def __repr__(self):
    1085 """ 1086 Official string representation for class instance. 1087 """ 1088 return "CommandOverride(%s, %s)" % (self.command, self.absolutePath)
    1089
    1090 - def __str__(self):
    1091 """ 1092 Informal string representation for class instance. 1093 """ 1094 return self.__repr__()
    1095
    1096 - def __cmp__(self, other):
    1097 """ 1098 Definition of equals operator for this class. 1099 @param other: Other object to compare to. 1100 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 1101 """ 1102 if other is None: 1103 return 1 1104 if self.command != other.command: 1105 if self.command < other.command: 1106 return -1 1107 else: 1108 return 1 1109 if self.absolutePath != other.absolutePath: 1110 if self.absolutePath < other.absolutePath: 1111 return -1 1112 else: 1113 return 1 1114 return 0
    1115
    1116 - def _setCommand(self, value):
    1117 """ 1118 Property target used to set the command. 1119 The value must be a non-empty string if it is not C{None}. 1120 @raise ValueError: If the value is an empty string. 1121 """ 1122 if value is not None: 1123 if len(value) < 1: 1124 raise ValueError("The command must be a non-empty string.") 1125 self._command = value
    1126
    1127 - def _getCommand(self):
    1128 """ 1129 Property target used to get the command. 1130 """ 1131 return self._command
    1132
    1133 - def _setAbsolutePath(self, value):
    1134 """ 1135 Property target used to set the absolute path. 1136 The value must be an absolute path if it is not C{None}. 1137 It does not have to exist on disk at the time of assignment. 1138 @raise ValueError: If the value is not an absolute path. 1139 @raise ValueError: If the value cannot be encoded properly. 1140 """ 1141 if value is not None: 1142 if not os.path.isabs(value): 1143 raise ValueError("Not an absolute path: [%s]" % value) 1144 self._absolutePath = encodePath(value)
    1145
    1146 - def _getAbsolutePath(self):
    1147 """ 1148 Property target used to get the absolute path. 1149 """ 1150 return self._absolutePath
1151 1152 command = property(_getCommand, _setCommand, None, doc="Name of command to be overridden.") 1153 absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the overridden command.")
    1154
    1155 1156 ######################################################################## 1157 # CollectFile class definition 1158 ######################################################################## 1159 1160 -class CollectFile(object):
    1161 1162 """ 1163 Class representing a Cedar Backup collect file. 1164 1165 The following restrictions exist on data in this class: 1166 1167 - Absolute paths must be absolute 1168 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 1169 - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. 1170 1171 @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, archiveMode 1172 """ 1173
    1174 - def __init__(self, absolutePath=None, collectMode=None, archiveMode=None):
    1175 """ 1176 Constructor for the C{CollectFile} class. 1177 1178 @param absolutePath: Absolute path of the file to collect. 1179 @param collectMode: Overridden collect mode for this file. 1180 @param archiveMode: Overridden archive mode for this file. 1181 1182 @raise ValueError: If one of the values is invalid. 1183 """ 1184 self._absolutePath = None 1185 self._collectMode = None 1186 self._archiveMode = None 1187 self.absolutePath = absolutePath 1188 self.collectMode = collectMode 1189 self.archiveMode = archiveMode
    1190
    1191 - def __repr__(self):
    1192 """ 1193 Official string representation for class instance. 1194 """ 1195 return "CollectFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode)
    1196
    1197 - def __str__(self):
    1198 """ 1199 Informal string representation for class instance. 1200 """ 1201 return self.__repr__()
    1202
    1203 - def __cmp__(self, other):
      """
      Definition of the comparison operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.absolutePath != other.absolutePath:
         if self.absolutePath < other.absolutePath:
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.archiveMode != other.archiveMode:
         if self.archiveMode < other.archiveMode:
            return -1
         else:
            return 1
      return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setArchiveMode(self, value):
      """
      Property target used to set the archive mode.
      If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ARCHIVE_MODES:
            raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
      self._archiveMode = value

   def _getArchiveMode(self):
      """
      Property target used to get the archive mode.
      """
      return self._archiveMode

   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the file to collect.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this file.")
   archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this file.")

########################################################################
# CollectDir class definition
########################################################################

class CollectDir(object):

   """
   Class representing a Cedar Backup collect directory.

   The following restrictions exist on data in this class:

      - Absolute paths must be absolute.
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}.
      - The ignore file must be a non-empty string.

   For the C{absoluteExcludePaths} list, validation is accomplished through the
   L{util.AbsolutePathList} list implementation that overrides common list
   methods and transparently does the absolute path validation for us.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode,
          archiveMode, ignoreFile, linkDepth, dereference, absoluteExcludePaths,
          relativeExcludePaths, excludePatterns
   """

   def __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None,
                absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None,
                linkDepth=None, dereference=False, recursionLevel=None):
      """
      Constructor for the C{CollectDir} class.

      @param absolutePath: Absolute path of the directory to collect.
      @param collectMode: Overridden collect mode for this directory.
      @param archiveMode: Overridden archive mode for this directory.
      @param ignoreFile: Overridden ignore file name for this directory.
      @param linkDepth: Maximum depth at which soft links should be followed.
      @param dereference: Whether to dereference links that are followed.
      @param recursionLevel: Recursion level to use for recursive directory collection.
      @param absoluteExcludePaths: List of absolute paths to exclude.
      @param relativeExcludePaths: List of relative paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude.

      @raise ValueError: If one of the values is invalid.
      """
      self._absolutePath = None
      self._collectMode = None
      self._archiveMode = None
      self._ignoreFile = None
      self._linkDepth = None
      self._dereference = None
      self._recursionLevel = None
      self._absoluteExcludePaths = None
      self._relativeExcludePaths = None
      self._excludePatterns = None
      self.absolutePath = absolutePath
      self.collectMode = collectMode
      self.archiveMode = archiveMode
      self.ignoreFile = ignoreFile
      self.linkDepth = linkDepth
      self.dereference = dereference
      self.recursionLevel = recursionLevel
      self.absoluteExcludePaths = absoluteExcludePaths
      self.relativeExcludePaths = relativeExcludePaths
      self.excludePatterns = excludePatterns

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CollectDir(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode,
                                                                     self.archiveMode, self.ignoreFile,
                                                                     self.absoluteExcludePaths,
                                                                     self.relativeExcludePaths,
                                                                     self.excludePatterns,
                                                                     self.linkDepth, self.dereference,
                                                                     self.recursionLevel)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of the comparison operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.absolutePath != other.absolutePath:
         if self.absolutePath < other.absolutePath:
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.archiveMode != other.archiveMode:
         if self.archiveMode < other.archiveMode:
            return -1
         else:
            return 1
      if self.ignoreFile != other.ignoreFile:
         if self.ignoreFile < other.ignoreFile:
            return -1
         else:
            return 1
      if self.linkDepth != other.linkDepth:
         if self.linkDepth < other.linkDepth:
            return -1
         else:
            return 1
      if self.dereference != other.dereference:
         if self.dereference < other.dereference:
            return -1
         else:
            return 1
      if self.recursionLevel != other.recursionLevel:
         if self.recursionLevel < other.recursionLevel:
            return -1
         else:
            return 1
      if self.absoluteExcludePaths != other.absoluteExcludePaths:
         if self.absoluteExcludePaths < other.absoluteExcludePaths:
            return -1
         else:
            return 1
      if self.relativeExcludePaths != other.relativeExcludePaths:
         if self.relativeExcludePaths < other.relativeExcludePaths:
            return -1
         else:
            return 1
      if self.excludePatterns != other.excludePatterns:
         if self.excludePatterns < other.excludePatterns:
            return -1
         else:
            return 1
      return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setArchiveMode(self, value):
      """
      Property target used to set the archive mode.
      If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ARCHIVE_MODES:
            raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
      self._archiveMode = value

   def _getArchiveMode(self):
      """
      Property target used to get the archive mode.
      """
      return self._archiveMode

   def _setIgnoreFile(self, value):
      """
      Property target used to set the ignore file.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The ignore file must be a non-empty string.")
      self._ignoreFile = value

   def _getIgnoreFile(self):
      """
      Property target used to get the ignore file.
      """
      return self._ignoreFile

   def _setLinkDepth(self, value):
      """
      Property target used to set the link depth.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._linkDepth = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):  # int() raises ValueError for strings, TypeError otherwise
            raise ValueError("Link depth value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Link depth value must be an integer >= 0.")
         self._linkDepth = value

   def _getLinkDepth(self):
      """
      Property target used to get the link depth.
      """
      return self._linkDepth

   def _setDereference(self, value):
      """
      Property target used to set the dereference flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._dereference = True
      else:
         self._dereference = False

   def _getDereference(self):
      """
      Property target used to get the dereference flag.
      """
      return self._dereference

   def _setRecursionLevel(self, value):
      """
      Property target used to set the recursion level.
      The value must be an integer.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._recursionLevel = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):  # int() raises ValueError for strings, TypeError otherwise
            raise ValueError("Recursion level value must be an integer.")
         self._recursionLevel = value

   def _getRecursionLevel(self):
      """
      Property target used to get the recursion level.
      """
      return self._recursionLevel

   def _setAbsoluteExcludePaths(self, value):
      """
      Property target used to set the absolute exclude paths list.
      Either the value must be C{None} or each element must be an absolute path.
      Elements do not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      """
      if value is None:
         self._absoluteExcludePaths = None
      else:
         try:
            saved = self._absoluteExcludePaths
            self._absoluteExcludePaths = AbsolutePathList()
            self._absoluteExcludePaths.extend(value)
         except Exception, e:
            self._absoluteExcludePaths = saved
            raise e

   def _getAbsoluteExcludePaths(self):
      """
      Property target used to get the absolute exclude paths list.
      """
      return self._absoluteExcludePaths

   def _setRelativeExcludePaths(self, value):
      """
      Property target used to set the relative exclude paths list.
      Elements do not have to exist on disk at the time of assignment.
      """
      if value is None:
         self._relativeExcludePaths = None
      else:
         try:
            saved = self._relativeExcludePaths
            self._relativeExcludePaths = UnorderedList()
            self._relativeExcludePaths.extend(value)
         except Exception, e:
            self._relativeExcludePaths = saved
            raise e

   def _getRelativeExcludePaths(self):
      """
      Property target used to get the relative exclude paths list.
      """
      return self._relativeExcludePaths

   def _setExcludePatterns(self, value):
      """
      Property target used to set the exclude patterns list.
      """
      if value is None:
         self._excludePatterns = None
      else:
         try:
            saved = self._excludePatterns
            self._excludePatterns = RegexList()
            self._excludePatterns.extend(value)
         except Exception, e:
            self._excludePatterns = saved
            raise e

   def _getExcludePatterns(self):
      """
      Property target used to get the exclude patterns list.
      """
      return self._excludePatterns

   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the directory to collect.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this directory.")
   archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this directory.")
   ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, doc="Overridden ignore file name for this directory.")
   linkDepth = property(_getLinkDepth, _setLinkDepth, None, doc="Maximum depth at which soft links should be followed.")
   dereference = property(_getDereference, _setDereference, None, doc="Whether to dereference links that are followed.")
   recursionLevel = property(_getRecursionLevel, _setRecursionLevel, None, "Recursion level to use for recursive directory collection.")
   absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.")
   relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.")
   excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")

########################################################################
# PurgeDir class definition
########################################################################

class PurgeDir(object):

   """
   Class representing a Cedar Backup purge directory.

   The following restrictions exist on data in this class:

      - The absolute path must be an absolute path.
      - The retain days value must be an integer >= 0.

   @sort: __init__, __repr__, __str__, __cmp__, absolutePath, retainDays
   """

   def __init__(self, absolutePath=None, retainDays=None):
      """
      Constructor for the C{PurgeDir} class.

      @param absolutePath: Absolute path of the directory to be purged.
      @param retainDays: Number of days content within directory should be retained.

      @raise ValueError: If one of the values is invalid.
      """
      self._absolutePath = None
      self._retainDays = None
      self.absolutePath = absolutePath
      self.retainDays = retainDays

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PurgeDir(%s, %s)" % (self.absolutePath, self.retainDays)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of the comparison operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.absolutePath != other.absolutePath:
         if self.absolutePath < other.absolutePath:
            return -1
         else:
            return 1
      if self.retainDays != other.retainDays:
         if self.retainDays < other.retainDays:
            return -1
         else:
            return 1
      return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setRetainDays(self, value):
      """
      Property target used to set the retain days value.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._retainDays = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):  # int() raises ValueError for strings, TypeError otherwise
            raise ValueError("Retain days value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Retain days value must be an integer >= 0.")
         self._retainDays = value

   def _getRetainDays(self):
      """
      Property target used to get the retain days value.
      """
      return self._retainDays

   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, "Absolute path of directory to purge.")
   retainDays = property(_getRetainDays, _setRetainDays, None, "Number of days content within directory should be retained.")

########################################################################
# LocalPeer class definition
########################################################################

class LocalPeer(object):

   """
   Class representing a Cedar Backup peer.

   The following restrictions exist on data in this class:

      - The peer name must be a non-empty string.
      - The collect directory must be an absolute path.
      - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}.

   @sort: __init__, __repr__, __str__, __cmp__, name, collectDir, ignoreFailureMode
   """

   def __init__(self, name=None, collectDir=None, ignoreFailureMode=None):
      """
      Constructor for the C{LocalPeer} class.

      @param name: Name of the peer, typically a valid hostname.
      @param collectDir: Collect directory to stage files from on peer.
      @param ignoreFailureMode: Ignore failure mode for peer.

      @raise ValueError: If one of the values is invalid.
      """
      self._name = None
      self._collectDir = None
      self._ignoreFailureMode = None
      self.name = name
      self.collectDir = collectDir
      self.ignoreFailureMode = ignoreFailureMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalPeer(%s, %s, %s)" % (self.name, self.collectDir, self.ignoreFailureMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of the comparison operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.name != other.name:
         if self.name < other.name:
            return -1
         else:
            return 1
      if self.collectDir != other.collectDir:
         if self.collectDir < other.collectDir:
            return -1
         else:
            return 1
      if self.ignoreFailureMode != other.ignoreFailureMode:
         if self.ignoreFailureMode < other.ignoreFailureMode:
            return -1
         else:
            return 1
      return 0

   def _setName(self, value):
      """
      Property target used to set the peer name.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The peer name must be a non-empty string.")
      self._name = value

   def _getName(self):
      """
      Property target used to get the peer name.
      """
      return self._name

   def _setCollectDir(self, value):
      """
      Property target used to set the collect directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Collect directory must be an absolute path.")
      self._collectDir = encodePath(value)

   def _getCollectDir(self):
      """
      Property target used to get the collect directory.
      """
      return self._collectDir

   def _setIgnoreFailureMode(self, value):
      """
      Property target used to set the ignore failure mode.
      If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_FAILURE_MODES:
            raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
      self._ignoreFailureMode = value

   def _getIgnoreFailureMode(self):
      """
      Property target used to get the ignore failure mode.
      """
      return self._ignoreFailureMode

   name = property(_getName, _setName, None, "Name of the peer, typically a valid hostname.")
   collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.")
   ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")

########################################################################
# RemotePeer class definition
########################################################################

class RemotePeer(object):

   """
   Class representing a Cedar Backup peer.

   The following restrictions exist on data in this class:

      - The peer name must be a non-empty string.
      - The collect directory must be an absolute path.
      - The remote user must be a non-empty string.
      - The rcp command must be a non-empty string.
      - The rsh command must be a non-empty string.
      - The cback command must be a non-empty string.
      - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX}.
      - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}.

   @sort: __init__, __repr__, __str__, __cmp__, name, collectDir, remoteUser,
          rcpCommand, rshCommand, cbackCommand, managed, managedActions,
          ignoreFailureMode
   """

   def __init__(self, name=None, collectDir=None, remoteUser=None,
                rcpCommand=None, rshCommand=None, cbackCommand=None,
                managed=False, managedActions=None, ignoreFailureMode=None):
      """
      Constructor for the C{RemotePeer} class.

      @param name: Name of the peer, must be a valid hostname.
      @param collectDir: Collect directory to stage files from on peer.
      @param remoteUser: Name of backup user on remote peer.
      @param rcpCommand: Overridden rcp-compatible copy command for peer.
      @param rshCommand: Overridden rsh-compatible remote shell command for peer.
      @param cbackCommand: Overridden cback-compatible command to use on remote peer.
      @param managed: Indicates whether this is a managed peer.
      @param managedActions: Overridden set of actions that are managed on the peer.
      @param ignoreFailureMode: Ignore failure mode for peer.

      @raise ValueError: If one of the values is invalid.
      """
      self._name = None
      self._collectDir = None
      self._remoteUser = None
      self._rcpCommand = None
      self._rshCommand = None
      self._cbackCommand = None
      self._managed = None
      self._managedActions = None
      self._ignoreFailureMode = None
      self.name = name
      self.collectDir = collectDir
      self.remoteUser = remoteUser
      self.rcpCommand = rcpCommand
      self.rshCommand = rshCommand
      self.cbackCommand = cbackCommand
      self.managed = managed
      self.managedActions = managedActions
      self.ignoreFailureMode = ignoreFailureMode

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "RemotePeer(%s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.name, self.collectDir, self.remoteUser,
                                                                 self.rcpCommand, self.rshCommand, self.cbackCommand,
                                                                 self.managed, self.managedActions, self.ignoreFailureMode)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of the comparison operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.name != other.name:
         if self.name < other.name:
            return -1
         else:
            return 1
      if self.collectDir != other.collectDir:
         if self.collectDir < other.collectDir:
            return -1
         else:
            return 1
      if self.remoteUser != other.remoteUser:
         if self.remoteUser < other.remoteUser:
            return -1
         else:
            return 1
      if self.rcpCommand != other.rcpCommand:
         if self.rcpCommand < other.rcpCommand:
            return -1
         else:
            return 1
      if self.rshCommand != other.rshCommand:
         if self.rshCommand < other.rshCommand:
            return -1
         else:
            return 1
      if self.cbackCommand != other.cbackCommand:
         if self.cbackCommand < other.cbackCommand:
            return -1
         else:
            return 1
      if self.managed != other.managed:
         if self.managed < other.managed:
            return -1
         else:
            return 1
      if self.managedActions != other.managedActions:
         if self.managedActions < other.managedActions:
            return -1
         else:
            return 1
      if self.ignoreFailureMode != other.ignoreFailureMode:
         if self.ignoreFailureMode < other.ignoreFailureMode:
            return -1
         else:
            return 1
      return 0

   def _setName(self, value):
      """
      Property target used to set the peer name.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The peer name must be a non-empty string.")
      self._name = value

   def _getName(self):
      """
      Property target used to get the peer name.
      """
      return self._name

   def _setCollectDir(self, value):
      """
      Property target used to set the collect directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Collect directory must be an absolute path.")
      self._collectDir = encodePath(value)

   def _getCollectDir(self):
      """
      Property target used to get the collect directory.
      """
      return self._collectDir

   def _setRemoteUser(self, value):
      """
      Property target used to set the remote user.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The remote user must be a non-empty string.")
      self._remoteUser = value

   def _getRemoteUser(self):
      """
      Property target used to get the remote user.
      """
      return self._remoteUser

   def _setRcpCommand(self, value):
      """
      Property target used to set the rcp command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rcp command must be a non-empty string.")
      self._rcpCommand = value

   def _getRcpCommand(self):
      """
      Property target used to get the rcp command.
      """
      return self._rcpCommand

   def _setRshCommand(self, value):
      """
      Property target used to set the rsh command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rsh command must be a non-empty string.")
      self._rshCommand = value

   def _getRshCommand(self):
      """
      Property target used to get the rsh command.
      """
      return self._rshCommand

   def _setCbackCommand(self, value):
      """
      Property target used to set the cback command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The cback command must be a non-empty string.")
      self._cbackCommand = value

   def _getCbackCommand(self):
      """
      Property target used to get the cback command.
      """
      return self._cbackCommand

   def _setManaged(self, value):
      """
      Property target used to set the managed flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._managed = True
      else:
         self._managed = False

   def _getManaged(self):
      """
      Property target used to get the managed flag.
      """
      return self._managed

   def _setManagedActions(self, value):
      """
      Property target used to set the managed actions list.
      Elements do not have to exist on disk at the time of assignment.
      """
      if value is None:
         self._managedActions = None
      else:
         try:
            saved = self._managedActions
            self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
            self._managedActions.extend(value)
         except Exception, e:
            self._managedActions = saved
            raise e

   def _getManagedActions(self):
      """
      Property target used to get the managed actions list.
      """
      return self._managedActions

   def _setIgnoreFailureMode(self, value):
      """
      Property target used to set the ignore failure mode.
      If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_FAILURE_MODES:
            raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
      self._ignoreFailureMode = value

   def _getIgnoreFailureMode(self):
      """
      Property target used to get the ignore failure mode.
      """
      return self._ignoreFailureMode

   name = property(_getName, _setName, None, "Name of the peer, must be a valid hostname.")
   collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.")
   remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of backup user on remote peer.")
   rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Overridden rcp-compatible copy command for peer.")
   rshCommand = property(_getRshCommand, _setRshCommand, None, "Overridden rsh-compatible remote shell command for peer.")
   cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Overridden cback-compatible command to use on remote peer.")
   managed = property(_getManaged, _setManaged, None, "Indicates whether this is a managed peer.")
   managedActions = property(_getManagedActions, _setManagedActions, None, "Overridden set of actions that are managed on the peer.")
   ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")
    2176
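The list setters above all share one idiom: build the validating list object in a temporary, and roll the attribute back to its previous value if any element is rejected. The sketch below is a minimal, self-contained illustration of that idiom, not the actual CedarBackup2 classes; the simplified `RegexMatchList` and the stand-in `ACTION_NAME_REGEX` value are assumptions for illustration.

```python
import re

ACTION_NAME_REGEX = r"^[a-z0-9]*$"  # assumption: simplified stand-in pattern

class RegexMatchList(list):
   """Simplified stand-in: a list that only accepts strings matching a regex."""
   def __init__(self, pattern):
      list.__init__(self)
      self._pattern = re.compile(pattern)
   def append(self, item):
      if not self._pattern.match(item):
         raise ValueError("Item %r does not match pattern." % item)
      list.append(self, item)
   def extend(self, items):
      for item in items:
         self.append(item)

class PeerSketch(object):
   def __init__(self):
      self._managedActions = None
   def _setManagedActions(self, value):
      if value is None:
         self._managedActions = None
      else:
         saved = self._managedActions          # remember the old list
         try:
            self._managedActions = RegexMatchList(ACTION_NAME_REGEX)
            self._managedActions.extend(value)  # validates each element
         except Exception:
            self._managedActions = saved        # roll back on any failure
            raise
   managedActions = property(lambda self: self._managedActions, _setManagedActions)
```

The net effect is that a failed assignment leaves the object exactly as it was, rather than half-populated.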

########################################################################
# ReferenceConfig class definition
########################################################################

class ReferenceConfig(object):

   """
   Class representing a Cedar Backup reference configuration.

   The reference information is just used for saving off metadata about
   configuration and exists mostly for backwards-compatibility with Cedar
   Backup 1.x.

   @sort: __init__, __repr__, __str__, __cmp__, author, revision, description, generator
   """

   def __init__(self, author=None, revision=None, description=None, generator=None):
      """
      Constructor for the C{ReferenceConfig} class.

      @param author: Author of the configuration file.
      @param revision: Revision of the configuration file.
      @param description: Description of the configuration file.
      @param generator: Tool that generated the configuration file.
      """
      self._author = None
      self._revision = None
      self._description = None
      self._generator = None
      self.author = author
      self.revision = revision
      self.description = description
      self.generator = generator

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ReferenceConfig(%s, %s, %s, %s)" % (self.author, self.revision, self.description, self.generator)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.author != other.author:
         if self.author < other.author:
            return -1
         else:
            return 1
      if self.revision != other.revision:
         if self.revision < other.revision:
            return -1
         else:
            return 1
      if self.description != other.description:
         if self.description < other.description:
            return -1
         else:
            return 1
      if self.generator != other.generator:
         if self.generator < other.generator:
            return -1
         else:
            return 1
      return 0

   def _setAuthor(self, value):
      """
      Property target used to set the author value.
      No validations.
      """
      self._author = value

   def _getAuthor(self):
      """
      Property target used to get the author value.
      """
      return self._author

   def _setRevision(self, value):
      """
      Property target used to set the revision value.
      No validations.
      """
      self._revision = value

   def _getRevision(self):
      """
      Property target used to get the revision value.
      """
      return self._revision

   def _setDescription(self, value):
      """
      Property target used to set the description value.
      No validations.
      """
      self._description = value

   def _getDescription(self):
      """
      Property target used to get the description value.
      """
      return self._description

   def _setGenerator(self, value):
      """
      Property target used to set the generator value.
      No validations.
      """
      self._generator = value

   def _getGenerator(self):
      """
      Property target used to get the generator value.
      """
      return self._generator

   author = property(_getAuthor, _setAuthor, None, "Author of the configuration file.")
   revision = property(_getRevision, _setRevision, None, "Revision of the configuration file.")
   description = property(_getDescription, _setDescription, None, "Description of the configuration file.")
   generator = property(_getGenerator, _setGenerator, None, "Tool that generated the configuration file.")


########################################################################
# ExtensionsConfig class definition
########################################################################

class ExtensionsConfig(object):

   """
   Class representing Cedar Backup extensions configuration.

   Extensions configuration is used to specify "extended actions" implemented
   by code external to Cedar Backup.  For instance, a hypothetical third party
   might write extension code to collect database repository data.  If they
   write a properly-formatted extension function, they can use the extension
   configuration to map a command-line Cedar Backup action (i.e. "database")
   to their function.

   The following restrictions exist on data in this class:

      - If set, the order mode must be one of the values in C{VALID_ORDER_MODES}
      - The actions list must be a list of C{ExtendedAction} objects.

   @sort: __init__, __repr__, __str__, __cmp__, orderMode, actions
   """

   def __init__(self, actions=None, orderMode=None):
      """
      Constructor for the C{ExtensionsConfig} class.
      @param actions: List of extended actions.
      @param orderMode: Order mode for extensions, to control execution ordering.
      """
      self._orderMode = None
      self._actions = None
      self.orderMode = orderMode
      self.actions = actions

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ExtensionsConfig(%s, %s)" % (self.orderMode, self.actions)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.orderMode != other.orderMode:
         if self.orderMode < other.orderMode:
            return -1
         else:
            return 1
      if self.actions != other.actions:
         if self.actions < other.actions:
            return -1
         else:
            return 1
      return 0

   def _setOrderMode(self, value):
      """
      Property target used to set the order mode.
      The value must be one of L{VALID_ORDER_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ORDER_MODES:
            raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES)
      self._orderMode = value

   def _getOrderMode(self):
      """
      Property target used to get the order mode.
      """
      return self._orderMode

   def _setActions(self, value):
      """
      Property target used to set the actions list.
      Either the value must be C{None} or each element must be an C{ExtendedAction}.
      @raise ValueError: If the value is not an C{ExtendedAction}.
      """
      if value is None:
         self._actions = None
      else:
         try:
            saved = self._actions
            self._actions = ObjectTypeList(ExtendedAction, "ExtendedAction")
            self._actions.extend(value)
         except Exception, e:
            self._actions = saved
            raise e

   def _getActions(self):
      """
      Property target used to get the actions list.
      """
      return self._actions

   orderMode = property(_getOrderMode, _setOrderMode, None, "Order mode for extensions, to control execution ordering.")
   actions = property(_getActions, _setActions, None, "List of extended actions.")

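The "membership check" validation used by `_setOrderMode` (and by the other mode setters in this module) is simple enough to demonstrate standalone. The sketch below is a simplified stand-in, not the real class; the two values in `VALID_ORDER_MODES` are illustrative assumptions.

```python
VALID_ORDER_MODES = ["index", "dependency"]  # assumption: illustrative values

class ExtensionsConfigSketch(object):
   """Simplified stand-in showing the property-target validation pattern."""
   def __init__(self, orderMode=None):
      self._orderMode = None
      self.orderMode = orderMode  # goes through the validating setter
   def _setOrderMode(self, value):
      # None is always allowed; anything else must be a known mode
      if value is not None:
         if value not in VALID_ORDER_MODES:
            raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES)
      self._orderMode = value
   def _getOrderMode(self):
      return self._orderMode
   orderMode = property(_getOrderMode, _setOrderMode, None, "Order mode for extensions.")
```

Because the constructor assigns through the property rather than the private attribute, invalid configuration is rejected at construction time as well.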

########################################################################
# OptionsConfig class definition
########################################################################

class OptionsConfig(object):

   """
   Class representing a Cedar Backup global options configuration.

   The options section is used to store global configuration options and
   defaults that can be applied to other sections.

   The following restrictions exist on data in this class:

      - The working directory must be an absolute path.
      - The starting day must be a day of the week in English, i.e. C{"monday"}, C{"tuesday"}, etc.
      - All of the other values must be non-empty strings if they are set to something other than C{None}.
      - The overrides list must be a list of C{CommandOverride} objects.
      - The hooks list must be a list of C{ActionHook} objects.
      - The cback command must be a non-empty string.
      - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX}

   @sort: __init__, __repr__, __str__, __cmp__, startingDay, workingDir,
          backupUser, backupGroup, rcpCommand, rshCommand, overrides
   """

   def __init__(self, startingDay=None, workingDir=None, backupUser=None,
                backupGroup=None, rcpCommand=None, overrides=None,
                hooks=None, rshCommand=None, cbackCommand=None,
                managedActions=None):
      """
      Constructor for the C{OptionsConfig} class.

      @param startingDay: Day that starts the week.
      @param workingDir: Working (temporary) directory to use for backups.
      @param backupUser: Effective user that backups should run as.
      @param backupGroup: Effective group that backups should run as.
      @param rcpCommand: Default rcp-compatible copy command for staging.
      @param rshCommand: Default rsh-compatible command to use for remote shells.
      @param cbackCommand: Default cback-compatible command to use on managed remote peers.
      @param overrides: List of configured command path overrides, if any.
      @param hooks: List of configured pre- and post-action hooks.
      @param managedActions: Default set of actions that are managed on remote peers.

      @raise ValueError: If one of the values is invalid.
      """
      self._startingDay = None
      self._workingDir = None
      self._backupUser = None
      self._backupGroup = None
      self._rcpCommand = None
      self._rshCommand = None
      self._cbackCommand = None
      self._overrides = None
      self._hooks = None
      self._managedActions = None
      self.startingDay = startingDay
      self.workingDir = workingDir
      self.backupUser = backupUser
      self.backupGroup = backupGroup
      self.rcpCommand = rcpCommand
      self.rshCommand = rshCommand
      self.cbackCommand = cbackCommand
      self.overrides = overrides
      self.hooks = hooks
      self.managedActions = managedActions

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "OptionsConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.startingDay, self.workingDir,
                                                                        self.backupUser, self.backupGroup,
                                                                        self.rcpCommand, self.overrides,
                                                                        self.hooks, self.rshCommand,
                                                                        self.cbackCommand, self.managedActions)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.startingDay != other.startingDay:
         if self.startingDay < other.startingDay:
            return -1
         else:
            return 1
      if self.workingDir != other.workingDir:
         if self.workingDir < other.workingDir:
            return -1
         else:
            return 1
      if self.backupUser != other.backupUser:
         if self.backupUser < other.backupUser:
            return -1
         else:
            return 1
      if self.backupGroup != other.backupGroup:
         if self.backupGroup < other.backupGroup:
            return -1
         else:
            return 1
      if self.rcpCommand != other.rcpCommand:
         if self.rcpCommand < other.rcpCommand:
            return -1
         else:
            return 1
      if self.rshCommand != other.rshCommand:
         if self.rshCommand < other.rshCommand:
            return -1
         else:
            return 1
      if self.cbackCommand != other.cbackCommand:
         if self.cbackCommand < other.cbackCommand:
            return -1
         else:
            return 1
      if self.overrides != other.overrides:
         if self.overrides < other.overrides:
            return -1
         else:
            return 1
      if self.hooks != other.hooks:
         if self.hooks < other.hooks:
            return -1
         else:
            return 1
      if self.managedActions != other.managedActions:
         if self.managedActions < other.managedActions:
            return -1
         else:
            return 1
      return 0

   def addOverride(self, command, absolutePath):
      """
      If no override currently exists for the command, add one.
      @param command: Name of command to be overridden.
      @param absolutePath: Absolute path of the overridden command.
      """
      override = CommandOverride(command, absolutePath)
      if self.overrides is None:
         self.overrides = [ override, ]
      else:
         exists = False
         for obj in self.overrides:
            if obj.command == override.command:
               exists = True
               break
         if not exists:
            self.overrides.append(override)

   def replaceOverride(self, command, absolutePath):
      """
      If an override currently exists for the command, replace it; otherwise add it.
      @param command: Name of command to be overridden.
      @param absolutePath: Absolute path of the overridden command.
      """
      override = CommandOverride(command, absolutePath)
      if self.overrides is None:
         self.overrides = [ override, ]
      else:
         exists = False
         for obj in self.overrides:
            if obj.command == override.command:
               exists = True
               obj.absolutePath = override.absolutePath
               break
         if not exists:
            self.overrides.append(override)

   def _setStartingDay(self, value):
      """
      Property target used to set the starting day.
      If it is not C{None}, the value must be a valid English day of the week,
      one of C{"monday"}, C{"tuesday"}, C{"wednesday"}, etc.
      @raise ValueError: If the value is not a valid day of the week.
      """
      if value is not None:
         if value not in ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ]:
            raise ValueError("Starting day must be an English day of the week, i.e. \"monday\".")
      self._startingDay = value

   def _getStartingDay(self):
      """
      Property target used to get the starting day.
      """
      return self._startingDay

   def _setWorkingDir(self, value):
      """
      Property target used to set the working directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Working directory must be an absolute path.")
      self._workingDir = encodePath(value)

   def _getWorkingDir(self):
      """
      Property target used to get the working directory.
      """
      return self._workingDir

   def _setBackupUser(self, value):
      """
      Property target used to set the backup user.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Backup user must be a non-empty string.")
      self._backupUser = value

   def _getBackupUser(self):
      """
      Property target used to get the backup user.
      """
      return self._backupUser

   def _setBackupGroup(self, value):
      """
      Property target used to set the backup group.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Backup group must be a non-empty string.")
      self._backupGroup = value

   def _getBackupGroup(self):
      """
      Property target used to get the backup group.
      """
      return self._backupGroup

   def _setRcpCommand(self, value):
      """
      Property target used to set the rcp command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rcp command must be a non-empty string.")
      self._rcpCommand = value

   def _getRcpCommand(self):
      """
      Property target used to get the rcp command.
      """
      return self._rcpCommand

   def _setRshCommand(self, value):
      """
      Property target used to set the rsh command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rsh command must be a non-empty string.")
      self._rshCommand = value

   def _getRshCommand(self):
      """
      Property target used to get the rsh command.
      """
      return self._rshCommand

   def _setCbackCommand(self, value):
      """
      Property target used to set the cback command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The cback command must be a non-empty string.")
      self._cbackCommand = value

   def _getCbackCommand(self):
      """
      Property target used to get the cback command.
      """
      return self._cbackCommand

   def _setOverrides(self, value):
      """
      Property target used to set the command path overrides list.
      Either the value must be C{None} or each element must be a C{CommandOverride}.
      @raise ValueError: If the value is not a C{CommandOverride}.
      """
      if value is None:
         self._overrides = None
      else:
         try:
            saved = self._overrides
            self._overrides = ObjectTypeList(CommandOverride, "CommandOverride")
            self._overrides.extend(value)
         except Exception, e:
            self._overrides = saved
            raise e

   def _getOverrides(self):
      """
      Property target used to get the command path overrides list.
      """
      return self._overrides

   def _setHooks(self, value):
      """
      Property target used to set the pre- and post-action hooks list.
      Either the value must be C{None} or each element must be an C{ActionHook}.
      @raise ValueError: If the value is not an C{ActionHook}.
      """
      if value is None:
         self._hooks = None
      else:
         try:
            saved = self._hooks
            self._hooks = ObjectTypeList(ActionHook, "ActionHook")
            self._hooks.extend(value)
         except Exception, e:
            self._hooks = saved
            raise e

   def _getHooks(self):
      """
      Property target used to get the pre- and post-action hooks list.
      """
      return self._hooks

   def _setManagedActions(self, value):
      """
      Property target used to set the managed actions list.
      Either the value must be C{None} or each element must be a valid action name.
      """
      if value is None:
         self._managedActions = None
      else:
         try:
            saved = self._managedActions
            self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
            self._managedActions.extend(value)
         except Exception, e:
            self._managedActions = saved
            raise e

   def _getManagedActions(self):
      """
      Property target used to get the managed actions list.
      """
      return self._managedActions

   startingDay = property(_getStartingDay, _setStartingDay, None, "Day that starts the week.")
   workingDir = property(_getWorkingDir, _setWorkingDir, None, "Working (temporary) directory to use for backups.")
   backupUser = property(_getBackupUser, _setBackupUser, None, "Effective user that backups should run as.")
   backupGroup = property(_getBackupGroup, _setBackupGroup, None, "Effective group that backups should run as.")
   rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Default rcp-compatible copy command for staging.")
   rshCommand = property(_getRshCommand, _setRshCommand, None, "Default rsh-compatible command to use for remote shells.")
   cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Default cback-compatible command to use on managed remote peers.")
   overrides = property(_getOverrides, _setOverrides, None, "List of configured command path overrides, if any.")
   hooks = property(_getHooks, _setHooks, None, "List of configured pre- and post-action hooks.")
   managedActions = property(_getManagedActions, _setManagedActions, None, "Default set of actions that are managed on remote peers.")

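The difference between `addOverride()` and `replaceOverride()` is easy to miss: the first is "first setting wins", the second is "last setting wins". The sketch below is a self-contained illustration of those semantics using simplified stand-ins for the real classes (the command names and paths are made up for the example).

```python
class CommandOverride(object):
   """Simplified stand-in: maps a command name to an absolute path."""
   def __init__(self, command, absolutePath):
      self.command = command
      self.absolutePath = absolutePath

class OptionsSketch(object):
   def __init__(self):
      self.overrides = None
   def addOverride(self, command, absolutePath):
      # Add only if no override exists yet for this command (first wins).
      override = CommandOverride(command, absolutePath)
      if self.overrides is None:
         self.overrides = [override]
      elif not any(o.command == command for o in self.overrides):
         self.overrides.append(override)
   def replaceOverride(self, command, absolutePath):
      # Update an existing override in place, or add it (last wins).
      if self.overrides is None:
         self.overrides = [CommandOverride(command, absolutePath)]
         return
      for o in self.overrides:
         if o.command == command:
            o.absolutePath = absolutePath
            return
      self.overrides.append(CommandOverride(command, absolutePath))
```

This matters in practice because configuration loaded from file is typically applied with "replace" semantics, while programmatic defaults use "add" so they never clobber explicit settings.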

########################################################################
# PeersConfig class definition
########################################################################

class PeersConfig(object):

   """
   Class representing Cedar Backup global peer configuration.

   This section contains a list of local and remote peers in a master's backup
   pool.  The section is optional.  If a master does not define this section,
   then all peers are unmanaged, and the stage configuration section must
   explicitly list any peer that is to be staged.  If this section is
   configured, then peers may be managed or unmanaged, and the stage section
   peer configuration (if any) completely overrides this configuration.

   The following restrictions exist on data in this class:

      - The list of local peers must contain only C{LocalPeer} objects
      - The list of remote peers must contain only C{RemotePeer} objects

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, localPeers, remotePeers
   """

   def __init__(self, localPeers=None, remotePeers=None):
      """
      Constructor for the C{PeersConfig} class.

      @param localPeers: List of local peers.
      @param remotePeers: List of remote peers.

      @raise ValueError: If one of the values is invalid.
      """
      self._localPeers = None
      self._remotePeers = None
      self.localPeers = localPeers
      self.remotePeers = remotePeers

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PeersConfig(%s, %s)" % (self.localPeers, self.remotePeers)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.localPeers != other.localPeers:
         if self.localPeers < other.localPeers:
            return -1
         else:
            return 1
      if self.remotePeers != other.remotePeers:
         if self.remotePeers < other.remotePeers:
            return -1
         else:
            return 1
      return 0

   def hasPeers(self):
      """
      Indicates whether any peers are filled into this object.
      @return: Boolean true if any local or remote peers are filled in, false otherwise.
      """
      return ((self.localPeers is not None and len(self.localPeers) > 0) or
              (self.remotePeers is not None and len(self.remotePeers) > 0))

   def _setLocalPeers(self, value):
      """
      Property target used to set the local peers list.
      Either the value must be C{None} or each element must be a C{LocalPeer}.
      @raise ValueError: If the value is not a C{LocalPeer}.
      """
      if value is None:
         self._localPeers = None
      else:
         try:
            saved = self._localPeers
            self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer")
            self._localPeers.extend(value)
         except Exception, e:
            self._localPeers = saved
            raise e

   def _getLocalPeers(self):
      """
      Property target used to get the local peers list.
      """
      return self._localPeers

   def _setRemotePeers(self, value):
      """
      Property target used to set the remote peers list.
      Either the value must be C{None} or each element must be a C{RemotePeer}.
      @raise ValueError: If the value is not a C{RemotePeer}.
      """
      if value is None:
         self._remotePeers = None
      else:
         try:
            saved = self._remotePeers
            self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer")
            self._remotePeers.extend(value)
         except Exception, e:
            self._remotePeers = saved
            raise e

   def _getRemotePeers(self):
      """
      Property target used to get the remote peers list.
      """
      return self._remotePeers

   localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.")
   remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.")

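The `hasPeers()` predicate deliberately treats C{None} and an empty list the same way, which is worth pinning down with a quick truth table. This is a minimal self-contained sketch of just that method, not the real `PeersConfig` class:

```python
class PeersSketch(object):
   """Stand-in holding the two peer lists, without validation."""
   def __init__(self, localPeers=None, remotePeers=None):
      self.localPeers = localPeers
      self.remotePeers = remotePeers
   def hasPeers(self):
      # True only when at least one list is both present and non-empty;
      # None and [] both count as "no peers configured".
      return ((self.localPeers is not None and len(self.localPeers) > 0) or
              (self.remotePeers is not None and len(self.remotePeers) > 0))
```

Callers can therefore use `hasPeers()` to decide whether the optional peers section was meaningfully configured, without separately checking for C{None}.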

########################################################################
# CollectConfig class definition
########################################################################

class CollectConfig(object):

   """
   Class representing a Cedar Backup collect configuration.

   The following restrictions exist on data in this class:

      - The target directory must be an absolute path.
      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}.
      - The ignore file must be a non-empty string.
      - Each of the paths in C{absoluteExcludePaths} must be an absolute path
      - The collect file list must be a list of C{CollectFile} objects.
      - The collect directory list must be a list of C{CollectDir} objects.

   For the C{absoluteExcludePaths} list, validation is accomplished through the
   L{util.AbsolutePathList} list implementation that overrides common list
   methods and transparently does the absolute path validation for us.

   For the C{collectFiles} and C{collectDirs} lists, validation is accomplished
   through the L{util.ObjectTypeList} list implementation that overrides common
   list methods and transparently ensures that each element has an appropriate
   type.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, targetDir,
          collectMode, archiveMode, ignoreFile, absoluteExcludePaths,
          excludePatterns, collectFiles, collectDirs
   """

   def __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None,
                absoluteExcludePaths=None, excludePatterns=None, collectFiles=None,
                collectDirs=None):
      """
      Constructor for the C{CollectConfig} class.

      @param targetDir: Directory to collect files into.
      @param collectMode: Default collect mode.
      @param archiveMode: Default archive mode for collect files.
      @param ignoreFile: Default ignore file name.
      @param absoluteExcludePaths: List of absolute paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude.
      @param collectFiles: List of collect files.
      @param collectDirs: List of collect directories.

      @raise ValueError: If one of the values is invalid.
      """
      self._targetDir = None
      self._collectMode = None
      self._archiveMode = None
      self._ignoreFile = None
      self._absoluteExcludePaths = None
      self._excludePatterns = None
      self._collectFiles = None
      self._collectDirs = None
      self.targetDir = targetDir
      self.collectMode = collectMode
      self.archiveMode = archiveMode
      self.ignoreFile = ignoreFile
      self.absoluteExcludePaths = absoluteExcludePaths
      self.excludePatterns = excludePatterns
      self.collectFiles = collectFiles
      self.collectDirs = collectDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CollectConfig(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.targetDir, self.collectMode, self.archiveMode,
                                                                self.ignoreFile, self.absoluteExcludePaths,
                                                                self.excludePatterns, self.collectFiles, self.collectDirs)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.targetDir != other.targetDir:
         if self.targetDir < other.targetDir:
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.archiveMode != other.archiveMode:
         if self.archiveMode < other.archiveMode:
            return -1
         else:
            return 1
      if self.ignoreFile != other.ignoreFile:
         if self.ignoreFile < other.ignoreFile:
            return -1
         else:
            return 1
      if self.absoluteExcludePaths != other.absoluteExcludePaths:
         if self.absoluteExcludePaths < other.absoluteExcludePaths:
            return -1
         else:
            return 1
      if self.excludePatterns != other.excludePatterns:
         if self.excludePatterns < other.excludePatterns:
            return -1
         else:
            return 1
      if self.collectFiles != other.collectFiles:
         if self.collectFiles < other.collectFiles:
            return -1
         else:
            return 1
      if self.collectDirs != other.collectDirs:
         if self.collectDirs < other.collectDirs:
            return -1
         else:
            return 1
      return 0

   def _setTargetDir(self, value):
      """
      Property target used to set the target directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Target directory must be an absolute path.")
      self._targetDir = encodePath(value)

   def _getTargetDir(self):
      """
      Property target used to get the target directory.
      """
      return self._targetDir

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setArchiveMode(self, value):
      """
      Property target used to set the archive mode.
      If not C{None}, the mode must be one of L{VALID_ARCHIVE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ARCHIVE_MODES:
            raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
      self._archiveMode = value

   def _getArchiveMode(self):
      """
      Property target used to get the archive mode.
      """
      return self._archiveMode

   def _setIgnoreFile(self, value):
      """
      Property target used to set the ignore file.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The ignore file must be a non-empty string.")
      self._ignoreFile = encodePath(value)

   def _getIgnoreFile(self):
      """
      Property target used to get the ignore file.
      """
      return self._ignoreFile

   def _setAbsoluteExcludePaths(self, value):
      """
      Property target used to set the absolute exclude paths list.
      Either the value must be C{None} or each element must be an absolute path.
      Elements do not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      """
      if value is None:
         self._absoluteExcludePaths = None
      else:
         try:
            saved = self._absoluteExcludePaths
            self._absoluteExcludePaths = AbsolutePathList()
            self._absoluteExcludePaths.extend(value)
         except Exception, e:
            self._absoluteExcludePaths = saved
            raise e
    3157
    3158 - def _getAbsoluteExcludePaths(self):
    3159 """ 3160 Property target used to get the absolute exclude paths list. 3161 """ 3162 return self._absoluteExcludePaths
    3163
    3164 - def _setExcludePatterns(self, value):
    3165 """ 3166 Property target used to set the exclude patterns list. 3167 """ 3168 if value is None: 3169 self._excludePatterns = None 3170 else: 3171 try: 3172 saved = self._excludePatterns 3173 self._excludePatterns = RegexList() 3174 self._excludePatterns.extend(value) 3175 except Exception, e: 3176 self._excludePatterns = saved 3177 raise e
    3178
    3179 - def _getExcludePatterns(self):
    3180 """ 3181 Property target used to get the exclude patterns list. 3182 """ 3183 return self._excludePatterns
    3184
    3185 - def _setCollectFiles(self, value):
    3186 """ 3187 Property target used to set the collect files list. 3188 Either the value must be C{None} or each element must be a C{CollectFile}. 3189 @raise ValueError: If the value is not a C{CollectFile} 3190 """ 3191 if value is None: 3192 self._collectFiles = None 3193 else: 3194 try: 3195 saved = self._collectFiles 3196 self._collectFiles = ObjectTypeList(CollectFile, "CollectFile") 3197 self._collectFiles.extend(value) 3198 except Exception, e: 3199 self._collectFiles = saved 3200 raise e
    3201
    3202 - def _getCollectFiles(self):
    3203 """ 3204 Property target used to get the collect files list. 3205 """ 3206 return self._collectFiles
    3207
    3208 - def _setCollectDirs(self, value):
    3209 """ 3210 Property target used to set the collect dirs list. 3211 Either the value must be C{None} or each element must be a C{CollectDir}. 3212 @raise ValueError: If the value is not a C{CollectDir} 3213 """ 3214 if value is None: 3215 self._collectDirs = None 3216 else: 3217 try: 3218 saved = self._collectDirs 3219 self._collectDirs = ObjectTypeList(CollectDir, "CollectDir") 3220 self._collectDirs.extend(value) 3221 except Exception, e: 3222 self._collectDirs = saved 3223 raise e
    3224
    3225 - def _getCollectDirs(self):
    3226 """ 3227 Property target used to get the collect dirs list. 3228 """ 3229 return self._collectDirs
    3230 3231 targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to collect files into.") 3232 collectMode = property(_getCollectMode, _setCollectMode, None, "Default collect mode.") 3233 archiveMode = property(_getArchiveMode, _setArchiveMode, None, "Default archive mode for collect files.") 3234 ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Default ignore file name.") 3235 absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.") 3236 excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expressions patterns to exclude.") 3237 collectFiles = property(_getCollectFiles, _setCollectFiles, None, "List of collect files.") 3238 collectDirs = property(_getCollectDirs, _setCollectDirs, None, "List of collect directories.")
    3239
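The field-by-field `__cmp__` above follows one repeated idiom: walk the attributes in order and return -1/0/1 at the first difference. Below is a minimal, self-contained sketch of that idiom in modern Python syntax (the module itself targets Python 2.5); the helper name `cmp_fields` and the `Point` class are hypothetical, not part of Cedar Backup.

```python
# Sketch of the field-by-field comparison idiom used by __cmp__ above.
# cmp_fields and Point are illustrative names, not Cedar Backup code.

def cmp_fields(a, b, fields):
    """Compare two objects attribute by attribute, returning -1/0/1."""
    if b is None:
        return 1  # any instance sorts after None, as in the real methods
    for name in fields:
        left, right = getattr(a, name), getattr(b, name)
        if left != right:
            return -1 if left < right else 1
    return 0

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

assert cmp_fields(Point(1, 2), Point(1, 3), ["x", "y"]) == -1
```

Because only the first differing attribute decides the result, the attribute order in the real `__cmp__` methods determines the sort order of configuration objects.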

########################################################################
# StageConfig class definition
########################################################################

class StageConfig(object):

   """
   Class representing a Cedar Backup stage configuration.

   The following restrictions exist on data in this class:

      - The target directory must be an absolute path
      - The list of local peers must contain only C{LocalPeer} objects
      - The list of remote peers must contain only C{RemotePeer} objects

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, targetDir, localPeers, remotePeers
   """

   def __init__(self, targetDir=None, localPeers=None, remotePeers=None):
      """
      Constructor for the C{StageConfig} class.

      @param targetDir: Directory to stage files into, by peer name.
      @param localPeers: List of local peers.
      @param remotePeers: List of remote peers.

      @raise ValueError: If one of the values is invalid.
      """
      self._targetDir = None
      self._localPeers = None
      self._remotePeers = None
      self.targetDir = targetDir
      self.localPeers = localPeers
      self.remotePeers = remotePeers

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "StageConfig(%s, %s, %s)" % (self.targetDir, self.localPeers, self.remotePeers)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.targetDir != other.targetDir:
         if self.targetDir < other.targetDir:
            return -1
         else:
            return 1
      if self.localPeers != other.localPeers:
         if self.localPeers < other.localPeers:
            return -1
         else:
            return 1
      if self.remotePeers != other.remotePeers:
         if self.remotePeers < other.remotePeers:
            return -1
         else:
            return 1
      return 0

   def hasPeers(self):
      """
      Indicates whether any peers are filled into this object.
      @return: Boolean true if any local or remote peers are filled in, false otherwise.
      """
      return ((self.localPeers is not None and len(self.localPeers) > 0) or
              (self.remotePeers is not None and len(self.remotePeers) > 0))

   def _setTargetDir(self, value):
      """
      Property target used to set the target directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Target directory must be an absolute path.")
      self._targetDir = encodePath(value)

   def _getTargetDir(self):
      """
      Property target used to get the target directory.
      """
      return self._targetDir

   def _setLocalPeers(self, value):
      """
      Property target used to set the local peers list.
      Either the value must be C{None} or each element must be a C{LocalPeer}.
      @raise ValueError: If the value is not a C{LocalPeer}
      """
      if value is None:
         self._localPeers = None
      else:
         try:
            saved = self._localPeers
            self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer")
            self._localPeers.extend(value)
         except Exception, e:
            self._localPeers = saved
            raise e

   def _getLocalPeers(self):
      """
      Property target used to get the local peers list.
      """
      return self._localPeers

   def _setRemotePeers(self, value):
      """
      Property target used to set the remote peers list.
      Either the value must be C{None} or each element must be a C{RemotePeer}.
      @raise ValueError: If the value is not a C{RemotePeer}
      """
      if value is None:
         self._remotePeers = None
      else:
         try:
            saved = self._remotePeers
            self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer")
            self._remotePeers.extend(value)
         except Exception, e:
            self._remotePeers = saved
            raise e

   def _getRemotePeers(self):
      """
      Property target used to get the remote peers list.
      """
      return self._remotePeers

   targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to stage files into, by peer name.")
   localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.")
   remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.")
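Every class in this module uses the same "property target" idiom: private `_set`/`_get` methods wired together with `property()`, so every assignment, including those made in `__init__`, passes through validation. A minimal self-contained sketch in modern Python syntax (the class name is illustrative; the real setters also pass values through `encodePath()`):

```python
# Sketch of the property-target idiom used by the config classes.
# PathHolder is a hypothetical name; only the validation shape matches.
import os.path

class PathHolder(object):
    def __init__(self, targetDir=None):
        self._targetDir = None
        self.targetDir = targetDir  # routed through the property setter

    def _setTargetDir(self, value):
        if value is not None and not os.path.isabs(value):
            raise ValueError("Target directory must be an absolute path.")
        self._targetDir = value

    def _getTargetDir(self):
        return self._targetDir

    targetDir = property(_getTargetDir, _setTargetDir, None,
                         "Directory to stage files into.")
```

Because the constructor assigns through the property rather than the private attribute, even constructor arguments are validated: `PathHolder("relative/path")` raises `ValueError`.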

########################################################################
# StoreConfig class definition
########################################################################

class StoreConfig(object):

   """
   Class representing a Cedar Backup store configuration.

   The following restrictions exist on data in this class:

      - The source directory must be an absolute path.
      - The media type must be one of the values in L{VALID_MEDIA_TYPES}.
      - The device type must be one of the values in L{VALID_DEVICE_TYPES}.
      - The device path must be an absolute path.
      - The SCSI id, if provided, must be in the form specified by L{validateScsiId}.
      - The drive speed must be an integer >= 1
      - The blanking behavior must be a C{BlankBehavior} object
      - The refresh media delay must be an integer >= 0
      - The eject delay must be an integer >= 0

   Note that although the blanking factor must be a positive floating point
   number, it is stored as a string.  This is done so that we can losslessly go
   back and forth between XML and object representations of configuration.

   @sort: __init__, __repr__, __str__, __cmp__, sourceDir,
          mediaType, deviceType, devicePath, deviceScsiId,
          driveSpeed, checkData, checkMedia, warnMidnite, noEject,
          blankBehavior, refreshMediaDelay, ejectDelay
   """

   def __init__(self, sourceDir=None, mediaType=None, deviceType=None,
                devicePath=None, deviceScsiId=None, driveSpeed=None,
                checkData=False, warnMidnite=False, noEject=False,
                checkMedia=False, blankBehavior=None, refreshMediaDelay=None,
                ejectDelay=None):
      """
      Constructor for the C{StoreConfig} class.

      @param sourceDir: Directory whose contents should be written to media.
      @param mediaType: Type of the media (see notes above).
      @param deviceType: Type of the device (optional, see notes above).
      @param devicePath: Filesystem device name for writer device, i.e. C{/dev/cdrw}.
      @param deviceScsiId: SCSI id for writer device, i.e. C{[<method>:]scsibus,target,lun}.
      @param driveSpeed: Speed of the drive, i.e. C{2} for 2x drive, etc.
      @param checkData: Whether resulting image should be validated.
      @param checkMedia: Whether media should be checked before being written to.
      @param warnMidnite: Whether to generate warnings for crossing midnite.
      @param noEject: Indicates that the writer device should not be ejected.
      @param blankBehavior: Controls optimized blanking behavior.
      @param refreshMediaDelay: Delay, in seconds, to add after refreshing media
      @param ejectDelay: Delay, in seconds, to add after ejecting media before closing the tray

      @raise ValueError: If one of the values is invalid.
      """
      self._sourceDir = None
      self._mediaType = None
      self._deviceType = None
      self._devicePath = None
      self._deviceScsiId = None
      self._driveSpeed = None
      self._checkData = None
      self._checkMedia = None
      self._warnMidnite = None
      self._noEject = None
      self._blankBehavior = None
      self._refreshMediaDelay = None
      self._ejectDelay = None
      self.sourceDir = sourceDir
      self.mediaType = mediaType
      self.deviceType = deviceType
      self.devicePath = devicePath
      self.deviceScsiId = deviceScsiId
      self.driveSpeed = driveSpeed
      self.checkData = checkData
      self.checkMedia = checkMedia
      self.warnMidnite = warnMidnite
      self.noEject = noEject
      self.blankBehavior = blankBehavior
      self.refreshMediaDelay = refreshMediaDelay
      self.ejectDelay = ejectDelay

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (
             self.sourceDir, self.mediaType, self.deviceType,
             self.devicePath, self.deviceScsiId, self.driveSpeed,
             self.checkData, self.warnMidnite, self.noEject,
             self.checkMedia, self.blankBehavior, self.refreshMediaDelay,
             self.ejectDelay)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.sourceDir != other.sourceDir:
         if self.sourceDir < other.sourceDir:
            return -1
         else:
            return 1
      if self.mediaType != other.mediaType:
         if self.mediaType < other.mediaType:
            return -1
         else:
            return 1
      if self.deviceType != other.deviceType:
         if self.deviceType < other.deviceType:
            return -1
         else:
            return 1
      if self.devicePath != other.devicePath:
         if self.devicePath < other.devicePath:
            return -1
         else:
            return 1
      if self.deviceScsiId != other.deviceScsiId:
         if self.deviceScsiId < other.deviceScsiId:
            return -1
         else:
            return 1
      if self.driveSpeed != other.driveSpeed:
         if self.driveSpeed < other.driveSpeed:
            return -1
         else:
            return 1
      if self.checkData != other.checkData:
         if self.checkData < other.checkData:
            return -1
         else:
            return 1
      if self.checkMedia != other.checkMedia:
         if self.checkMedia < other.checkMedia:
            return -1
         else:
            return 1
      if self.warnMidnite != other.warnMidnite:
         if self.warnMidnite < other.warnMidnite:
            return -1
         else:
            return 1
      if self.noEject != other.noEject:
         if self.noEject < other.noEject:
            return -1
         else:
            return 1
      if self.blankBehavior != other.blankBehavior:
         if self.blankBehavior < other.blankBehavior:
            return -1
         else:
            return 1
      if self.refreshMediaDelay != other.refreshMediaDelay:
         if self.refreshMediaDelay < other.refreshMediaDelay:
            return -1
         else:
            return 1
      if self.ejectDelay != other.ejectDelay:
         if self.ejectDelay < other.ejectDelay:
            return -1
         else:
            return 1
      return 0

   def _setSourceDir(self, value):
      """
      Property target used to set the source directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Source directory must be an absolute path.")
      self._sourceDir = encodePath(value)

   def _getSourceDir(self):
      """
      Property target used to get the source directory.
      """
      return self._sourceDir

   def _setMediaType(self, value):
      """
      Property target used to set the media type.
      The value must be one of L{VALID_MEDIA_TYPES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_MEDIA_TYPES:
            raise ValueError("Media type must be one of %s." % VALID_MEDIA_TYPES)
      self._mediaType = value

   def _getMediaType(self):
      """
      Property target used to get the media type.
      """
      return self._mediaType

   def _setDeviceType(self, value):
      """
      Property target used to set the device type.
      The value must be one of L{VALID_DEVICE_TYPES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_DEVICE_TYPES:
            raise ValueError("Device type must be one of %s." % VALID_DEVICE_TYPES)
      self._deviceType = value

   def _getDeviceType(self):
      """
      Property target used to get the device type.
      """
      return self._deviceType

   def _setDevicePath(self, value):
      """
      Property target used to set the device path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Device path must be an absolute path.")
      self._devicePath = encodePath(value)

   def _getDevicePath(self):
      """
      Property target used to get the device path.
      """
      return self._devicePath

   def _setDeviceScsiId(self, value):
      """
      Property target used to set the SCSI id.
      The SCSI id must be valid per L{validateScsiId}.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._deviceScsiId = None
      else:
         self._deviceScsiId = validateScsiId(value)

   def _getDeviceScsiId(self):
      """
      Property target used to get the SCSI id.
      """
      return self._deviceScsiId

   def _setDriveSpeed(self, value):
      """
      Property target used to set the drive speed.
      The drive speed must be valid per L{validateDriveSpeed}.
      @raise ValueError: If the value is not valid.
      """
      self._driveSpeed = validateDriveSpeed(value)

   def _getDriveSpeed(self):
      """
      Property target used to get the drive speed.
      """
      return self._driveSpeed

   def _setCheckData(self, value):
      """
      Property target used to set the check data flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._checkData = True
      else:
         self._checkData = False

   def _getCheckData(self):
      """
      Property target used to get the check data flag.
      """
      return self._checkData

   def _setCheckMedia(self, value):
      """
      Property target used to set the check media flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._checkMedia = True
      else:
         self._checkMedia = False

   def _getCheckMedia(self):
      """
      Property target used to get the check media flag.
      """
      return self._checkMedia

   def _setWarnMidnite(self, value):
      """
      Property target used to set the midnite warning flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._warnMidnite = True
      else:
         self._warnMidnite = False

   def _getWarnMidnite(self):
      """
      Property target used to get the midnite warning flag.
      """
      return self._warnMidnite

   def _setNoEject(self, value):
      """
      Property target used to set the no-eject flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._noEject = True
      else:
         self._noEject = False

   def _getNoEject(self):
      """
      Property target used to get the no-eject flag.
      """
      return self._noEject

   def _setBlankBehavior(self, value):
      """
      Property target used to set blanking behavior configuration.
      If not C{None}, the value must be a C{BlankBehavior} object.
      @raise ValueError: If the value is not a C{BlankBehavior}
      """
      if value is None:
         self._blankBehavior = None
      else:
         if not isinstance(value, BlankBehavior):
            raise ValueError("Value must be a C{BlankBehavior} object.")
         self._blankBehavior = value

   def _getBlankBehavior(self):
      """
      Property target used to get the blanking behavior configuration.
      """
      return self._blankBehavior

   def _setRefreshMediaDelay(self, value):
      """
      Property target used to set the refreshMediaDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._refreshMediaDelay = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):  # int() raises ValueError for bad strings
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._refreshMediaDelay = value

   def _getRefreshMediaDelay(self):
      """
      Property target used to get the action refreshMediaDelay.
      """
      return self._refreshMediaDelay

   def _setEjectDelay(self, value):
      """
      Property target used to set the ejectDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._ejectDelay = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):  # int() raises ValueError for bad strings
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._ejectDelay = value

   def _getEjectDelay(self):
      """
      Property target used to get the action ejectDelay.
      """
      return self._ejectDelay

   sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.")
   mediaType = property(_getMediaType, _setMediaType, None, "Type of the media (see notes above).")
   deviceType = property(_getDeviceType, _setDeviceType, None, "Type of the device (optional, see notes above).")
   devicePath = property(_getDevicePath, _setDevicePath, None, "Filesystem device name for writer device.")
   deviceScsiId = property(_getDeviceScsiId, _setDeviceScsiId, None, "SCSI id for writer device (optional, see notes above).")
   driveSpeed = property(_getDriveSpeed, _setDriveSpeed, None, "Speed of the drive.")
   checkData = property(_getCheckData, _setCheckData, None, "Whether resulting image should be validated.")
   checkMedia = property(_getCheckMedia, _setCheckMedia, None, "Whether media should be checked before being written to.")
   warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.")
   noEject = property(_getNoEject, _setNoEject, None, "Indicates that the writer device should not be ejected.")
   blankBehavior = property(_getBlankBehavior, _setBlankBehavior, None, "Controls optimized blanking behavior.")
   refreshMediaDelay = property(_getRefreshMediaDelay, _setRefreshMediaDelay, None, "Delay, in seconds, to add after refreshing media.")
   ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, to add after ejecting media before closing the tray.")
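The two delay setters share the same validation shape: coerce to an integer, require it to be >= 0, and normalize 0 back to `None` because that is the default. A self-contained sketch of that logic in modern Python (the function name is hypothetical; note that `int()` raises `ValueError`, not `TypeError`, for an unparseable string, so both are caught here):

```python
# Sketch of the refreshMediaDelay/ejectDelay validation: integer >= 0,
# with 0 normalized out to None.  normalize_delay is an illustrative name.

def normalize_delay(value):
    if value is None:
        return None
    try:
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError("Delay value must be an integer >= 0.")
    if value < 0:
        raise ValueError("Delay value must be an integer >= 0.")
    return None if value == 0 else value  # 0 is the default, so drop it
```

Normalizing 0 to `None` keeps the object representation and the XML representation in sync: a delay of zero and an absent delay element mean the same thing.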

########################################################################
# PurgeConfig class definition
########################################################################

class PurgeConfig(object):

   """
   Class representing a Cedar Backup purge configuration.

   The following restrictions exist on data in this class:

      - The purge directory list must be a list of C{PurgeDir} objects.

   For the C{purgeDirs} list, validation is accomplished through the
   L{util.ObjectTypeList} list implementation that overrides common list
   methods and transparently ensures that each element is a C{PurgeDir}.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, purgeDirs
   """

   def __init__(self, purgeDirs=None):
      """
      Constructor for the C{PurgeConfig} class.
      @param purgeDirs: List of purge directories.
      @raise ValueError: If one of the values is invalid.
      """
      self._purgeDirs = None
      self.purgeDirs = purgeDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PurgeConfig(%s)" % self.purgeDirs

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.purgeDirs != other.purgeDirs:
         if self.purgeDirs < other.purgeDirs:
            return -1
         else:
            return 1
      return 0

   def _setPurgeDirs(self, value):
      """
      Property target used to set the purge dirs list.
      Either the value must be C{None} or each element must be a C{PurgeDir}.
      @raise ValueError: If the value is not a C{PurgeDir}
      """
      if value is None:
         self._purgeDirs = None
      else:
         try:
            saved = self._purgeDirs
            self._purgeDirs = ObjectTypeList(PurgeDir, "PurgeDir")
            self._purgeDirs.extend(value)
         except Exception, e:
            self._purgeDirs = saved
            raise e

   def _getPurgeDirs(self):
      """
      Property target used to get the purge dirs list.
      """
      return self._purgeDirs

   purgeDirs = property(_getPurgeDirs, _setPurgeDirs, None, "List of directories to purge.")
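The `purgeDirs` setter, like the other list setters in this module, relies on `util.ObjectTypeList`: a list subclass whose mutating methods check element types, so a bad element makes `extend()` raise and the setter restores the previously saved list. A minimal sketch of that list, in modern Python syntax; `TypedList` is an illustrative name, not the real implementation.

```python
# Sketch of the ObjectTypeList idea: a list that rejects elements of the
# wrong type.  TypedList is a hypothetical stand-in for util.ObjectTypeList.

class TypedList(list):
    def __init__(self, objectType, objectName):
        super().__init__()
        self._type = objectType
        self._name = objectName

    def append(self, item):
        if not isinstance(item, self._type):
            raise ValueError("Item must be a %s." % self._name)
        super().append(item)

    def extend(self, items):
        for item in items:  # route through append so every item is checked
            self.append(item)
```

In the setters above, the save-and-restore pattern (`saved = self._purgeDirs` before `extend`, restored in the `except` clause) guarantees that a failed assignment leaves the configuration object exactly as it was.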
    3903 3904 ######################################################################## 3905 # Config class definition 3906 ######################################################################## 3907 3908 -class Config(object):
    3909 3910 ###################### 3911 # Class documentation 3912 ###################### 3913 3914 """ 3915 Class representing a Cedar Backup XML configuration document. 3916 3917 The C{Config} class is a Python object representation of a Cedar Backup XML 3918 configuration file. It is intended to be the only Python-language interface 3919 to Cedar Backup configuration on disk for both Cedar Backup itself and for 3920 external applications. 3921 3922 The object representation is two-way: XML data can be used to create a 3923 C{Config} object, and then changes to the object can be propogated back to 3924 disk. A C{Config} object can even be used to create a configuration file 3925 from scratch programmatically. 3926 3927 This class and the classes it is composed from often use Python's 3928 C{property} construct to validate input and limit access to values. Some 3929 validations can only be done once a document is considered "complete" 3930 (see module notes for more details). 3931 3932 Assignments to the various instance variables must match the expected 3933 type, i.e. C{reference} must be a C{ReferenceConfig}. The internal check 3934 uses the built-in C{isinstance} function, so it should be OK to use 3935 subclasses if you want to. 3936 3937 If an instance variable is not set, its value will be C{None}. When an 3938 object is initialized without using an XML document, all of the values 3939 will be C{None}. Even when an object is initialized using XML, some of 3940 the values might be C{None} because not every section is required. 3941 3942 @note: Lists within this class are "unordered" for equality comparisons. 
3943 3944 @sort: __init__, __repr__, __str__, __cmp__, extractXml, validate, 3945 reference, extensions, options, collect, stage, store, purge, 3946 _getReference, _setReference, _getExtensions, _setExtensions, 3947 _getOptions, _setOptions, _getPeers, _setPeers, _getCollect, 3948 _setCollect, _getStage, _setStage, _getStore, _setStore, 3949 _getPurge, _setPurge 3950 """ 3951 3952 ############## 3953 # Constructor 3954 ############## 3955
    3956 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    3957 """ 3958 Initializes a configuration object. 3959 3960 If you initialize the object without passing either C{xmlData} or 3961 C{xmlPath}, then configuration will be empty and will be invalid until it 3962 is filled in properly. 3963 3964 No reference to the original XML data or original path is saved off by 3965 this class. Once the data has been parsed (successfully or not) this 3966 original information is discarded. 3967 3968 Unless the C{validate} argument is C{False}, the L{Config.validate} 3969 method will be called (with its default arguments) against configuration 3970 after successfully parsing any passed-in XML. Keep in mind that even if 3971 C{validate} is C{False}, it might not be possible to parse the passed-in 3972 XML document if lower-level validations fail. 3973 3974 @note: It is strongly suggested that the C{validate} option always be set 3975 to C{True} (the default) unless there is a specific need to read in 3976 invalid configuration from disk. 3977 3978 @param xmlData: XML data representing configuration. 3979 @type xmlData: String data. 3980 3981 @param xmlPath: Path to an XML file on disk. 3982 @type xmlPath: Absolute path to a file on disk. 3983 3984 @param validate: Validate the document after parsing it. 3985 @type validate: Boolean true/false. 3986 3987 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 3988 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 3989 @raise ValueError: If the parsed configuration document is not valid. 
3990 """ 3991 self._reference = None 3992 self._extensions = None 3993 self._options = None 3994 self._peers = None 3995 self._collect = None 3996 self._stage = None 3997 self._store = None 3998 self._purge = None 3999 self.reference = None 4000 self.extensions = None 4001 self.options = None 4002 self.peers = None 4003 self.collect = None 4004 self.stage = None 4005 self.store = None 4006 self.purge = None 4007 if xmlData is not None and xmlPath is not None: 4008 raise ValueError("Use either xmlData or xmlPath, but not both.") 4009 if xmlData is not None: 4010 self._parseXmlData(xmlData) 4011 if validate: 4012 self.validate() 4013 elif xmlPath is not None: 4014 xmlData = open(xmlPath).read() 4015 self._parseXmlData(xmlData) 4016 if validate: 4017 self.validate()
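The constructor above accepts C{xmlData} and C{xmlPath} as mutually exclusive arguments and only runs validation after a successful parse. A minimal standalone sketch of that argument handling (C{SketchConfig} and its trivial C{validate} are hypothetical illustrations, not the real Cedar Backup parser):

```python
# Hypothetical sketch of the Config.__init__ argument handling; SketchConfig
# and its trivial validate() are illustrations, not the real parser.
class SketchConfig(object):
    def __init__(self, xmlData=None, xmlPath=None, validate=True):
        if xmlData is not None and xmlPath is not None:
            raise ValueError("Use either xmlData or xmlPath, but not both.")
        if xmlPath is not None:
            with open(xmlPath) as fp:  # close the handle deterministically
                xmlData = fp.read()
        self.xmlData = xmlData
        if xmlData is not None and validate:
            self.validate()

    def validate(self):
        # Stand-in for the real Config.validate() section checks.
        if not self.xmlData.strip():
            raise ValueError("Empty configuration document.")
```

As in the real class, a no-argument construction yields an empty (and invalid) object, and validation can be deferred by passing C{validate=False}.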
    4018 4019 4020 ######################### 4021 # String representations 4022 ######################### 4023
    4024 - def __repr__(self):
    4025 """ 4026 Official string representation for class instance. 4027 """ 4028 return "Config(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.reference, self.extensions, self.options, 4029 self.peers, self.collect, self.stage, self.store, 4030 self.purge)
    4031
    4032 - def __str__(self):
    4033 """ 4034 Informal string representation for class instance. 4035 """ 4036 return self.__repr__()
    4037 4038 4039 ############################# 4040 # Standard comparison method 4041 ############################# 4042
    4043 - def __cmp__(self, other):
    4044 """ 4045 Definition of equals operator for this class. 4046 Lists within this class are "unordered" for equality comparisons. 4047 @param other: Other object to compare to. 4048 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 4049 """ 4050 if other is None: 4051 return 1 4052 if self.reference != other.reference: 4053 if self.reference < other.reference: 4054 return -1 4055 else: 4056 return 1 4057 if self.extensions != other.extensions: 4058 if self.extensions < other.extensions: 4059 return -1 4060 else: 4061 return 1 4062 if self.options != other.options: 4063 if self.options < other.options: 4064 return -1 4065 else: 4066 return 1 4067 if self.peers != other.peers: 4068 if self.peers < other.peers: 4069 return -1 4070 else: 4071 return 1 4072 if self.collect != other.collect: 4073 if self.collect < other.collect: 4074 return -1 4075 else: 4076 return 1 4077 if self.stage != other.stage: 4078 if self.stage < other.stage: 4079 return -1 4080 else: 4081 return 1 4082 if self.store != other.store: 4083 if self.store < other.store: 4084 return -1 4085 else: 4086 return 1 4087 if self.purge != other.purge: 4088 if self.purge < other.purge: 4089 return -1 4090 else: 4091 return 1 4092 return 0
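The C{__cmp__} above cascades through each configuration section in a fixed order, returning as soon as a section differs. The same cascade can be written generically; C{field_cmp} below is a hypothetical helper (not part of Cedar Backup) that relies on the fields themselves being comparable:

```python
# Hypothetical helper mirroring the __cmp__ cascade above: compare two
# objects field by field, returning -1/0/1 like a Python 2 __cmp__.
def field_cmp(this, other, fields):
    if other is None:
        return 1
    for name in fields:
        mine, theirs = getattr(this, name), getattr(other, name)
        if mine != theirs:
            return -1 if mine < theirs else 1
    return 0
```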
    4093 4094 4095 ############# 4096 # Properties 4097 ############# 4098
    4099 - def _setReference(self, value):
    4100 """ 4101 Property target used to set the reference configuration value. 4102 If not C{None}, the value must be a C{ReferenceConfig} object. 4103 @raise ValueError: If the value is not a C{ReferenceConfig} 4104 """ 4105 if value is None: 4106 self._reference = None 4107 else: 4108 if not isinstance(value, ReferenceConfig): 4109 raise ValueError("Value must be a C{ReferenceConfig} object.") 4110 self._reference = value
    4111
    4112 - def _getReference(self):
    4113 """ 4114 Property target used to get the reference configuration value. 4115 """ 4116 return self._reference
    4117
    4118 - def _setExtensions(self, value):
    4119 """ 4120 Property target used to set the extensions configuration value. 4121 If not C{None}, the value must be a C{ExtensionsConfig} object. 4122 @raise ValueError: If the value is not a C{ExtensionsConfig} 4123 """ 4124 if value is None: 4125 self._extensions = None 4126 else: 4127 if not isinstance(value, ExtensionsConfig): 4128 raise ValueError("Value must be a C{ExtensionsConfig} object.") 4129 self._extensions = value
    4130
    4131 - def _getExtensions(self):
    4132 """ 4133 Property target used to get the extensions configuration value. 4134 """ 4135 return self._extensions
    4136
    4137 - def _setOptions(self, value):
    4138 """ 4139 Property target used to set the options configuration value. 4140 If not C{None}, the value must be an C{OptionsConfig} object. 4141 @raise ValueError: If the value is not a C{OptionsConfig} 4142 """ 4143 if value is None: 4144 self._options = None 4145 else: 4146 if not isinstance(value, OptionsConfig): 4147 raise ValueError("Value must be a C{OptionsConfig} object.") 4148 self._options = value
    4149
    4150 - def _getOptions(self):
    4151 """ 4152 Property target used to get the options configuration value. 4153 """ 4154 return self._options
    4155
    4156 - def _setPeers(self, value):
4157 """ 4158 Property target used to set the peers configuration value. 4159 If not C{None}, the value must be a C{PeersConfig} object. 4160 @raise ValueError: If the value is not a C{PeersConfig} 4161 """ 4162 if value is None: 4163 self._peers = None 4164 else: 4165 if not isinstance(value, PeersConfig): 4166 raise ValueError("Value must be a C{PeersConfig} object.") 4167 self._peers = value
    4168
    4169 - def _getPeers(self):
    4170 """ 4171 Property target used to get the peers configuration value. 4172 """ 4173 return self._peers
    4174
    4175 - def _setCollect(self, value):
    4176 """ 4177 Property target used to set the collect configuration value. 4178 If not C{None}, the value must be a C{CollectConfig} object. 4179 @raise ValueError: If the value is not a C{CollectConfig} 4180 """ 4181 if value is None: 4182 self._collect = None 4183 else: 4184 if not isinstance(value, CollectConfig): 4185 raise ValueError("Value must be a C{CollectConfig} object.") 4186 self._collect = value
    4187
    4188 - def _getCollect(self):
    4189 """ 4190 Property target used to get the collect configuration value. 4191 """ 4192 return self._collect
    4193
    4194 - def _setStage(self, value):
    4195 """ 4196 Property target used to set the stage configuration value. 4197 If not C{None}, the value must be a C{StageConfig} object. 4198 @raise ValueError: If the value is not a C{StageConfig} 4199 """ 4200 if value is None: 4201 self._stage = None 4202 else: 4203 if not isinstance(value, StageConfig): 4204 raise ValueError("Value must be a C{StageConfig} object.") 4205 self._stage = value
    4206
    4207 - def _getStage(self):
    4208 """ 4209 Property target used to get the stage configuration value. 4210 """ 4211 return self._stage
    4212
    4213 - def _setStore(self, value):
    4214 """ 4215 Property target used to set the store configuration value. 4216 If not C{None}, the value must be a C{StoreConfig} object. 4217 @raise ValueError: If the value is not a C{StoreConfig} 4218 """ 4219 if value is None: 4220 self._store = None 4221 else: 4222 if not isinstance(value, StoreConfig): 4223 raise ValueError("Value must be a C{StoreConfig} object.") 4224 self._store = value
    4225
    4226 - def _getStore(self):
    4227 """ 4228 Property target used to get the store configuration value. 4229 """ 4230 return self._store
    4231
    4232 - def _setPurge(self, value):
    4233 """ 4234 Property target used to set the purge configuration value. 4235 If not C{None}, the value must be a C{PurgeConfig} object. 4236 @raise ValueError: If the value is not a C{PurgeConfig} 4237 """ 4238 if value is None: 4239 self._purge = None 4240 else: 4241 if not isinstance(value, PurgeConfig): 4242 raise ValueError("Value must be a C{PurgeConfig} object.") 4243 self._purge = value
    4244
    4245 - def _getPurge(self):
    4246 """ 4247 Property target used to get the purge configuration value. 4248 """ 4249 return self._purge
    4250 4251 reference = property(_getReference, _setReference, None, "Reference configuration in terms of a C{ReferenceConfig} object.") 4252 extensions = property(_getExtensions, _setExtensions, None, "Extensions configuration in terms of a C{ExtensionsConfig} object.") 4253 options = property(_getOptions, _setOptions, None, "Options configuration in terms of a C{OptionsConfig} object.") 4254 peers = property(_getPeers, _setPeers, None, "Peers configuration in terms of a C{PeersConfig} object.") 4255 collect = property(_getCollect, _setCollect, None, "Collect configuration in terms of a C{CollectConfig} object.") 4256 stage = property(_getStage, _setStage, None, "Stage configuration in terms of a C{StageConfig} object.") 4257 store = property(_getStore, _setStore, None, "Store configuration in terms of a C{StoreConfig} object.") 4258 purge = property(_getPurge, _setPurge, None, "Purge configuration in terms of a C{PurgeConfig} object.") 4259 4260 4261 ################# 4262 # Public methods 4263 ################# 4264
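Each property above pairs a getter with a type-checking setter, so an invalid assignment fails immediately at assignment time rather than later during validation. A minimal sketch of the pattern, using a plain C{dict} as a stand-in for a section class such as C{StoreConfig}:

```python
# Sketch of the type-guarded property pattern used by Config; a plain
# dict stands in for a section object such as StoreConfig.
class GuardedSection(object):
    def __init__(self):
        self._store = None

    def _setStore(self, value):
        if value is not None and not isinstance(value, dict):
            raise ValueError("Value must be a dict (stand-in for StoreConfig).")
        self._store = value

    def _getStore(self):
        return self._store

    store = property(_getStore, _setStore, None, "Store configuration (sketch).")
```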
    4265 - def extractXml(self, xmlPath=None, validate=True):
    4266 """ 4267 Extracts configuration into an XML document. 4268 4269 If C{xmlPath} is not provided, then the XML document will be returned as 4270 a string. If C{xmlPath} is provided, then the XML document will be written 4271 to the file and C{None} will be returned. 4272 4273 Unless the C{validate} parameter is C{False}, the L{Config.validate} 4274 method will be called (with its default arguments) against the 4275 configuration before extracting the XML. If configuration is not valid, 4276 then an XML document will not be extracted. 4277 4278 @note: It is strongly suggested that the C{validate} option always be set 4279 to C{True} (the default) unless there is a specific need to write an 4280 invalid configuration file to disk. 4281 4282 @param xmlPath: Path to an XML file to create on disk. 4283 @type xmlPath: Absolute path to a file. 4284 4285 @param validate: Validate the document before extracting it. 4286 @type validate: Boolean true/false. 4287 4288 @return: XML string data or C{None} as described above. 4289 4290 @raise ValueError: If configuration within the object is not valid. 4291 @raise IOError: If there is an error writing to the file. 4292 @raise OSError: If there is an error writing to the file. 4293 """ 4294 if validate: 4295 self.validate() 4296 xmlData = self._extractXml() 4297 if xmlPath is not None: 4298 open(xmlPath, "w").write(xmlData) 4299 return None 4300 else: 4301 return xmlData
    4302
    4303 - def validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, 4304 requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False):
    4305 """ 4306 Validates configuration represented by the object. 4307 4308 This method encapsulates all of the validations that should apply to a 4309 fully "complete" document but are not already taken care of by earlier 4310 validations. It also provides some extra convenience functionality which 4311 might be useful to some people. The process of validation is laid out in 4312 the I{Validation} section in the class notes (above). 4313 4314 @param requireOneAction: Require at least one of the collect, stage, store or purge sections. 4315 @param requireReference: Require the reference section. 4316 @param requireExtensions: Require the extensions section. 4317 @param requireOptions: Require the options section. 4318 @param requirePeers: Require the peers section. 4319 @param requireCollect: Require the collect section. 4320 @param requireStage: Require the stage section. 4321 @param requireStore: Require the store section. 4322 @param requirePurge: Require the purge section. 4323 4324 @raise ValueError: If one of the validations fails. 
4325 """ 4326 if requireOneAction and (self.collect, self.stage, self.store, self.purge) == (None, None, None, None): 4327 raise ValueError("At least one of the collect, stage, store and purge sections is required.") 4328 if requireReference and self.reference is None: 4329 raise ValueError("The reference section is required.") 4330 if requireExtensions and self.extensions is None: 4331 raise ValueError("The extensions section is required.") 4332 if requireOptions and self.options is None: 4333 raise ValueError("The options section is required.") 4334 if requirePeers and self.peers is None: 4335 raise ValueError("The peers section is required.") 4336 if requireCollect and self.collect is None: 4337 raise ValueError("The collect section is required.") 4338 if requireStage and self.stage is None: 4339 raise ValueError("The stage section is required.") 4340 if requireStore and self.store is None: 4341 raise ValueError("The store section is required.") 4342 if requirePurge and self.purge is None: 4343 raise ValueError("The purge section is required.") 4344 self._validateContents()
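The gating logic in C{validate} can be sketched as a standalone function; C{check_required} and its C{sections} dict are hypothetical stand-ins for the attribute checks above:

```python
# Hypothetical stand-in for the validate() gating above: sections maps a
# section name to its parsed object (or None when the section is absent).
def check_required(sections, requireOneAction=True, **required):
    actions = ("collect", "stage", "store", "purge")
    if requireOneAction and all(sections.get(name) is None for name in actions):
        raise ValueError("At least one of the collect, stage, store and purge sections is required.")
    for name, wanted in required.items():
        if wanted and sections.get(name) is None:
            raise ValueError("The %s section is required." % name)
```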
    4345 4346 4347 ##################################### 4348 # High-level methods for parsing XML 4349 ##################################### 4350
    4351 - def _parseXmlData(self, xmlData):
    4352 """ 4353 Internal method to parse an XML string into the object. 4354 4355 This method parses the XML document into a DOM tree (C{xmlDom}) and then 4356 calls individual static methods to parse each of the individual 4357 configuration sections. 4358 4359 Most of the validation we do here has to do with whether the document can 4360 be parsed and whether any values which exist are valid. We don't do much 4361 validation as to whether required elements actually exist unless we have 4362 to to make sense of the document (instead, that's the job of the 4363 L{validate} method). 4364 4365 @param xmlData: XML data to be parsed 4366 @type xmlData: String data 4367 4368 @raise ValueError: If the XML cannot be successfully parsed. 4369 """ 4370 (xmlDom, parentNode) = createInputDom(xmlData) 4371 self._reference = Config._parseReference(parentNode) 4372 self._extensions = Config._parseExtensions(parentNode) 4373 self._options = Config._parseOptions(parentNode) 4374 self._peers = Config._parsePeers(parentNode) 4375 self._collect = Config._parseCollect(parentNode) 4376 self._stage = Config._parseStage(parentNode) 4377 self._store = Config._parseStore(parentNode) 4378 self._purge = Config._parsePurge(parentNode)
    4379 4380 @staticmethod
    4381 - def _parseReference(parentNode):
    4382 """ 4383 Parses a reference configuration section. 4384 4385 We read the following fields:: 4386 4387 author //cb_config/reference/author 4388 revision //cb_config/reference/revision 4389 description //cb_config/reference/description 4390 generator //cb_config/reference/generator 4391 4392 @param parentNode: Parent node to search beneath. 4393 4394 @return: C{ReferenceConfig} object or C{None} if the section does not exist. 4395 @raise ValueError: If some filled-in value is invalid. 4396 """ 4397 reference = None 4398 sectionNode = readFirstChild(parentNode, "reference") 4399 if sectionNode is not None: 4400 reference = ReferenceConfig() 4401 reference.author = readString(sectionNode, "author") 4402 reference.revision = readString(sectionNode, "revision") 4403 reference.description = readString(sectionNode, "description") 4404 reference.generator = readString(sectionNode, "generator") 4405 return reference
    4406 4407 @staticmethod
    4408 - def _parseExtensions(parentNode):
    4409 """ 4410 Parses an extensions configuration section. 4411 4412 We read the following fields:: 4413 4414 orderMode //cb_config/extensions/order_mode 4415 4416 We also read groups of the following items, one list element per item:: 4417 4418 name //cb_config/extensions/action/name 4419 module //cb_config/extensions/action/module 4420 function //cb_config/extensions/action/function 4421 index //cb_config/extensions/action/index 4422 dependencies //cb_config/extensions/action/depends 4423 4424 The extended actions are parsed by L{_parseExtendedActions}. 4425 4426 @param parentNode: Parent node to search beneath. 4427 4428 @return: C{ExtensionsConfig} object or C{None} if the section does not exist. 4429 @raise ValueError: If some filled-in value is invalid. 4430 """ 4431 extensions = None 4432 sectionNode = readFirstChild(parentNode, "extensions") 4433 if sectionNode is not None: 4434 extensions = ExtensionsConfig() 4435 extensions.orderMode = readString(sectionNode, "order_mode") 4436 extensions.actions = Config._parseExtendedActions(sectionNode) 4437 return extensions
    4438 4439 @staticmethod
    4440 - def _parseOptions(parentNode):
4441 """ 4442 Parses an options configuration section. 4443 4444 We read the following fields:: 4445 4446 startingDay //cb_config/options/starting_day 4447 workingDir //cb_config/options/working_dir 4448 backupUser //cb_config/options/backup_user 4449 backupGroup //cb_config/options/backup_group 4450 rcpCommand //cb_config/options/rcp_command 4451 rshCommand //cb_config/options/rsh_command 4452 cbackCommand //cb_config/options/cback_command 4453 managedActions //cb_config/options/managed_actions 4454 4455 The list of managed actions is a comma-separated list of action names. 4456 4457 We also read groups of the following items, one list element per 4458 item:: 4459 4460 overrides //cb_config/options/override 4461 hooks //cb_config/options/hook 4462 4463 The overrides are parsed by L{_parseOverrides} and the hooks are parsed 4464 by L{_parseHooks}. 4465 4466 @param parentNode: Parent node to search beneath. 4467 4468 @return: C{OptionsConfig} object or C{None} if the section does not exist. 4469 @raise ValueError: If some filled-in value is invalid. 4470 """ 4471 options = None 4472 sectionNode = readFirstChild(parentNode, "options") 4473 if sectionNode is not None: 4474 options = OptionsConfig() 4475 options.startingDay = readString(sectionNode, "starting_day") 4476 options.workingDir = readString(sectionNode, "working_dir") 4477 options.backupUser = readString(sectionNode, "backup_user") 4478 options.backupGroup = readString(sectionNode, "backup_group") 4479 options.rcpCommand = readString(sectionNode, "rcp_command") 4480 options.rshCommand = readString(sectionNode, "rsh_command") 4481 options.cbackCommand = readString(sectionNode, "cback_command") 4482 options.overrides = Config._parseOverrides(sectionNode) 4483 options.hooks = Config._parseHooks(sectionNode) 4484 managedActions = readString(sectionNode, "managed_actions") 4485 options.managedActions = parseCommaSeparatedString(managedActions) 4486 return options
    4487 4488 @staticmethod
    4489 - def _parsePeers(parentNode):
4490 """ 4491 Parses a peers configuration section. 4492 4493 We read groups of the following items, one list element per 4494 item:: 4495 4496 localPeers //cb_config/peers/peer 4497 remotePeers //cb_config/peers/peer 4498 4499 The individual peer entries are parsed by L{_parsePeerList}. 4500 4501 @param parentNode: Parent node to search beneath. 4502 4503 @return: C{PeersConfig} object or C{None} if the section does not exist. 4504 @raise ValueError: If some filled-in value is invalid. 4505 """ 4506 peers = None 4507 sectionNode = readFirstChild(parentNode, "peers") 4508 if sectionNode is not None: 4509 peers = PeersConfig() 4510 (peers.localPeers, peers.remotePeers) = Config._parsePeerList(sectionNode) 4511 return peers
    4512 4513 @staticmethod
    4514 - def _parseCollect(parentNode):
    4515 """ 4516 Parses a collect configuration section. 4517 4518 We read the following individual fields:: 4519 4520 targetDir //cb_config/collect/collect_dir 4521 collectMode //cb_config/collect/collect_mode 4522 archiveMode //cb_config/collect/archive_mode 4523 ignoreFile //cb_config/collect/ignore_file 4524 4525 We also read groups of the following items, one list element per 4526 item:: 4527 4528 absoluteExcludePaths //cb_config/collect/exclude/abs_path 4529 excludePatterns //cb_config/collect/exclude/pattern 4530 collectFiles //cb_config/collect/file 4531 collectDirs //cb_config/collect/dir 4532 4533 The exclusions are parsed by L{_parseExclusions}, the collect files are 4534 parsed by L{_parseCollectFiles}, and the directories are parsed by 4535 L{_parseCollectDirs}. 4536 4537 @param parentNode: Parent node to search beneath. 4538 4539 @return: C{CollectConfig} object or C{None} if the section does not exist. 4540 @raise ValueError: If some filled-in value is invalid. 4541 """ 4542 collect = None 4543 sectionNode = readFirstChild(parentNode, "collect") 4544 if sectionNode is not None: 4545 collect = CollectConfig() 4546 collect.targetDir = readString(sectionNode, "collect_dir") 4547 collect.collectMode = readString(sectionNode, "collect_mode") 4548 collect.archiveMode = readString(sectionNode, "archive_mode") 4549 collect.ignoreFile = readString(sectionNode, "ignore_file") 4550 (collect.absoluteExcludePaths, unused, collect.excludePatterns) = Config._parseExclusions(sectionNode) 4551 collect.collectFiles = Config._parseCollectFiles(sectionNode) 4552 collect.collectDirs = Config._parseCollectDirs(sectionNode) 4553 return collect
    4554 4555 @staticmethod
    4556 - def _parseStage(parentNode):
    4557 """ 4558 Parses a stage configuration section. 4559 4560 We read the following individual fields:: 4561 4562 targetDir //cb_config/stage/staging_dir 4563 4564 We also read groups of the following items, one list element per 4565 item:: 4566 4567 localPeers //cb_config/stage/peer 4568 remotePeers //cb_config/stage/peer 4569 4570 The individual peer entries are parsed by L{_parsePeerList}. 4571 4572 @param parentNode: Parent node to search beneath. 4573 4574 @return: C{StageConfig} object or C{None} if the section does not exist. 4575 @raise ValueError: If some filled-in value is invalid. 4576 """ 4577 stage = None 4578 sectionNode = readFirstChild(parentNode, "stage") 4579 if sectionNode is not None: 4580 stage = StageConfig() 4581 stage.targetDir = readString(sectionNode, "staging_dir") 4582 (stage.localPeers, stage.remotePeers) = Config._parsePeerList(sectionNode) 4583 return stage
    4584 4585 @staticmethod
    4586 - def _parseStore(parentNode):
    4587 """ 4588 Parses a store configuration section. 4589 4590 We read the following fields:: 4591 4592 sourceDir //cb_config/store/source_dir 4593 mediaType //cb_config/store/media_type 4594 deviceType //cb_config/store/device_type 4595 devicePath //cb_config/store/target_device 4596 deviceScsiId //cb_config/store/target_scsi_id 4597 driveSpeed //cb_config/store/drive_speed 4598 checkData //cb_config/store/check_data 4599 checkMedia //cb_config/store/check_media 4600 warnMidnite //cb_config/store/warn_midnite 4601 noEject //cb_config/store/no_eject 4602 4603 Blanking behavior configuration is parsed by the C{_parseBlankBehavior} 4604 method. 4605 4606 @param parentNode: Parent node to search beneath. 4607 4608 @return: C{StoreConfig} object or C{None} if the section does not exist. 4609 @raise ValueError: If some filled-in value is invalid. 4610 """ 4611 store = None 4612 sectionNode = readFirstChild(parentNode, "store") 4613 if sectionNode is not None: 4614 store = StoreConfig() 4615 store.sourceDir = readString(sectionNode, "source_dir") 4616 store.mediaType = readString(sectionNode, "media_type") 4617 store.deviceType = readString(sectionNode, "device_type") 4618 store.devicePath = readString(sectionNode, "target_device") 4619 store.deviceScsiId = readString(sectionNode, "target_scsi_id") 4620 store.driveSpeed = readInteger(sectionNode, "drive_speed") 4621 store.checkData = readBoolean(sectionNode, "check_data") 4622 store.checkMedia = readBoolean(sectionNode, "check_media") 4623 store.warnMidnite = readBoolean(sectionNode, "warn_midnite") 4624 store.noEject = readBoolean(sectionNode, "no_eject") 4625 store.blankBehavior = Config._parseBlankBehavior(sectionNode) 4626 store.refreshMediaDelay = readInteger(sectionNode, "refresh_media_delay") 4627 store.ejectDelay = readInteger(sectionNode, "eject_delay") 4628 return store
    4629 4630 @staticmethod
    4631 - def _parsePurge(parentNode):
    4632 """ 4633 Parses a purge configuration section. 4634 4635 We read groups of the following items, one list element per 4636 item:: 4637 4638 purgeDirs //cb_config/purge/dir 4639 4640 The individual directory entries are parsed by L{_parsePurgeDirs}. 4641 4642 @param parentNode: Parent node to search beneath. 4643 4644 @return: C{PurgeConfig} object or C{None} if the section does not exist. 4645 @raise ValueError: If some filled-in value is invalid. 4646 """ 4647 purge = None 4648 sectionNode = readFirstChild(parentNode, "purge") 4649 if sectionNode is not None: 4650 purge = PurgeConfig() 4651 purge.purgeDirs = Config._parsePurgeDirs(sectionNode) 4652 return purge
    4653 4654 @staticmethod
    4655 - def _parseExtendedActions(parentNode):
    4656 """ 4657 Reads extended actions data from immediately beneath the parent. 4658 4659 We read the following individual fields from each extended action:: 4660 4661 name name 4662 module module 4663 function function 4664 index index 4665 dependencies depends 4666 4667 Dependency information is parsed by the C{_parseDependencies} method. 4668 4669 @param parentNode: Parent node to search beneath. 4670 4671 @return: List of extended actions. 4672 @raise ValueError: If the data at the location can't be read 4673 """ 4674 lst = [] 4675 for entry in readChildren(parentNode, "action"): 4676 if isElement(entry): 4677 action = ExtendedAction() 4678 action.name = readString(entry, "name") 4679 action.module = readString(entry, "module") 4680 action.function = readString(entry, "function") 4681 action.index = readInteger(entry, "index") 4682 action.dependencies = Config._parseDependencies(entry) 4683 lst.append(action) 4684 if lst == []: 4685 lst = None 4686 return lst
    4687 4688 @staticmethod
    4689 - def _parseExclusions(parentNode):
4690 """ 4691 Reads exclusions data from immediately beneath the parent. 4692 4693 We read groups of the following items, one list element per item:: 4694 4695 absolute exclude/abs_path 4696 relative exclude/rel_path 4697 patterns exclude/pattern 4698 4699 If there are no items of a given type (i.e. no relative path items) then 4700 C{None} will be returned for that item in the tuple. 4701 4702 This method can be used to parse exclusions on both the collect 4703 configuration level and on the collect directory level within collect 4704 configuration. 4705 4706 @param parentNode: Parent node to search beneath. 4707 4708 @return: Tuple of (absolute, relative, patterns) exclusions. 4709 """ 4710 sectionNode = readFirstChild(parentNode, "exclude") 4711 if sectionNode is None: 4712 return (None, None, None) 4713 else: 4714 absolute = readStringList(sectionNode, "abs_path") 4715 relative = readStringList(sectionNode, "rel_path") 4716 patterns = readStringList(sectionNode, "pattern") 4717 return (absolute, relative, patterns)
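A rough standalone approximation of the C{_parseExclusions} flow using the stdlib C{xml.dom.minidom} module; C{read_string_list} is a hypothetical stand-in for the real C{readStringList} helper (note that C{getElementsByTagName} searches all descendants, while the real helper reads only immediate children):

```python
# Rough approximation of the _parseExclusions flow using stdlib minidom.
# read_string_list is a hypothetical stand-in for readStringList; unlike
# the real helper it searches all descendants, not just immediate children.
from xml.dom.minidom import parseString

def read_string_list(parent, name):
    values = [node.firstChild.nodeValue
              for node in parent.getElementsByTagName(name)
              if node.firstChild is not None]
    return values or None  # an empty list is normalized to None, as above

doc = parseString("<exclude><abs_path>/tmp</abs_path><pattern>.*~</pattern></exclude>")
section = doc.documentElement
absolute = read_string_list(section, "abs_path")
relative = read_string_list(section, "rel_path")   # no rel_path items -> None
patterns = read_string_list(section, "pattern")
```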
    4718 4719 @staticmethod
    4720 - def _parseOverrides(parentNode):
    4721 """ 4722 Reads a list of C{CommandOverride} objects from immediately beneath the parent. 4723 4724 We read the following individual fields:: 4725 4726 command command 4727 absolutePath abs_path 4728 4729 @param parentNode: Parent node to search beneath. 4730 4731 @return: List of C{CommandOverride} objects or C{None} if none are found. 4732 @raise ValueError: If some filled-in value is invalid. 4733 """ 4734 lst = [] 4735 for entry in readChildren(parentNode, "override"): 4736 if isElement(entry): 4737 override = CommandOverride() 4738 override.command = readString(entry, "command") 4739 override.absolutePath = readString(entry, "abs_path") 4740 lst.append(override) 4741 if lst == []: 4742 lst = None 4743 return lst
    4744 4745 @staticmethod
    4746 - def _parseHooks(parentNode):
    4747 """ 4748 Reads a list of C{ActionHook} objects from immediately beneath the parent. 4749 4750 We read the following individual fields:: 4751 4752 action action 4753 command command 4754 4755 @param parentNode: Parent node to search beneath. 4756 4757 @return: List of C{ActionHook} objects or C{None} if none are found. 4758 @raise ValueError: If some filled-in value is invalid. 4759 """ 4760 lst = [] 4761 for entry in readChildren(parentNode, "pre_action_hook"): 4762 if isElement(entry): 4763 hook = PreActionHook() 4764 hook.action = readString(entry, "action") 4765 hook.command = readString(entry, "command") 4766 lst.append(hook) 4767 for entry in readChildren(parentNode, "post_action_hook"): 4768 if isElement(entry): 4769 hook = PostActionHook() 4770 hook.action = readString(entry, "action") 4771 hook.command = readString(entry, "command") 4772 lst.append(hook) 4773 if lst == []: 4774 lst = None 4775 return lst
    4776 4777 @staticmethod
    4778 - def _parseCollectFiles(parentNode):
    4779 """ 4780 Reads a list of C{CollectFile} objects from immediately beneath the parent. 4781 4782 We read the following individual fields:: 4783 4784 absolutePath abs_path 4785 collectMode mode I{or} collect_mode 4786 archiveMode archive_mode 4787 4788 The collect mode is a special case. Just a C{mode} tag is accepted, but 4789 we prefer C{collect_mode} for consistency with the rest of the config 4790 file and to avoid confusion with the archive mode. If both are provided, 4791 only C{mode} will be used. 4792 4793 @param parentNode: Parent node to search beneath. 4794 4795 @return: List of C{CollectFile} objects or C{None} if none are found. 4796 @raise ValueError: If some filled-in value is invalid. 4797 """ 4798 lst = [] 4799 for entry in readChildren(parentNode, "file"): 4800 if isElement(entry): 4801 cfile = CollectFile() 4802 cfile.absolutePath = readString(entry, "abs_path") 4803 cfile.collectMode = readString(entry, "mode") 4804 if cfile.collectMode is None: 4805 cfile.collectMode = readString(entry, "collect_mode") 4806 cfile.archiveMode = readString(entry, "archive_mode") 4807 lst.append(cfile) 4808 if lst == []: 4809 lst = None 4810 return lst
    4811 4812 @staticmethod
    4813 - def _parseCollectDirs(parentNode):
    4814 """ 4815 Reads a list of C{CollectDir} objects from immediately beneath the parent. 4816 4817 We read the following individual fields:: 4818 4819 absolutePath abs_path 4820 collectMode mode I{or} collect_mode 4821 archiveMode archive_mode 4822 ignoreFile ignore_file 4823 linkDepth link_depth 4824 dereference dereference 4825 recursionLevel recursion_level 4826 4827 The collect mode is a special case. Just a C{mode} tag is accepted for 4828 backwards compatibility, but we prefer C{collect_mode} for consistency 4829 with the rest of the config file and to avoid confusion with the archive 4830 mode. If both are provided, only C{mode} will be used. 4831 4832 We also read groups of the following items, one list element per 4833 item:: 4834 4835 absoluteExcludePaths exclude/abs_path 4836 relativeExcludePaths exclude/rel_path 4837 excludePatterns exclude/pattern 4838 4839 The exclusions are parsed by L{_parseExclusions}. 4840 4841 @param parentNode: Parent node to search beneath. 4842 4843 @return: List of C{CollectDir} objects or C{None} if none are found. 4844 @raise ValueError: If some filled-in value is invalid. 4845 """ 4846 lst = [] 4847 for entry in readChildren(parentNode, "dir"): 4848 if isElement(entry): 4849 cdir = CollectDir() 4850 cdir.absolutePath = readString(entry, "abs_path") 4851 cdir.collectMode = readString(entry, "mode") 4852 if cdir.collectMode is None: 4853 cdir.collectMode = readString(entry, "collect_mode") 4854 cdir.archiveMode = readString(entry, "archive_mode") 4855 cdir.ignoreFile = readString(entry, "ignore_file") 4856 cdir.linkDepth = readInteger(entry, "link_depth") 4857 cdir.dereference = readBoolean(entry, "dereference") 4858 cdir.recursionLevel = readInteger(entry, "recursion_level") 4859 (cdir.absoluteExcludePaths, cdir.relativeExcludePaths, cdir.excludePatterns) = Config._parseExclusions(entry) 4860 lst.append(cdir) 4861 if lst == []: 4862 lst = None 4863 return lst
    4864 4865 @staticmethod
    4866 - def _parsePurgeDirs(parentNode):
    4867 """ 4868 Reads a list of C{PurgeDir} objects from immediately beneath the parent. 4869 4870 We read the following individual fields:: 4871 4872 absolutePath <baseExpr>/abs_path 4873 retainDays <baseExpr>/retain_days 4874 4875 @param parentNode: Parent node to search beneath. 4876 4877 @return: List of C{PurgeDir} objects or C{None} if none are found. 4878 @raise ValueError: If the data at the location can't be read 4879 """ 4880 lst = [] 4881 for entry in readChildren(parentNode, "dir"): 4882 if isElement(entry): 4883 cdir = PurgeDir() 4884 cdir.absolutePath = readString(entry, "abs_path") 4885 cdir.retainDays = readInteger(entry, "retain_days") 4886 lst.append(cdir) 4887 if lst == []: 4888 lst = None 4889 return lst

    @staticmethod
    def _parsePeerList(parentNode):
        """
        Reads remote and local peer data from immediately beneath the parent.

        We read the following individual fields for both remote and local
        peers::

           name                    name
           collectDir              collect_dir
           ignoreFailureMode       ignore_failures

        We also read the following individual fields for remote peers only::

           remoteUser              backup_user
           rcpCommand              rcp_command
           rshCommand              rsh_command
           cbackCommand            cback_command
           managed                 managed
           managedActions          managed_actions

        Additionally, the value in the C{type} field is used to determine
        whether an entry is a remote peer.  If the type is C{"remote"}, it's a
        remote peer, and if the type is C{"local"}, it's a local peer.

        If there are no peers of a given type (i.e. no local peers), then
        C{None} will be returned for that item in the tuple.

        @param parentNode: Parent node to search beneath.

        @return: Tuple of (local, remote) peer lists.
        @raise ValueError: If the data at the location can't be read.
        """
        localPeers = []
        remotePeers = []
        for entry in readChildren(parentNode, "peer"):
            if isElement(entry):
                peerType = readString(entry, "type")
                if peerType == "local":
                    localPeer = LocalPeer()
                    localPeer.name = readString(entry, "name")
                    localPeer.collectDir = readString(entry, "collect_dir")
                    localPeer.ignoreFailureMode = readString(entry, "ignore_failures")
                    localPeers.append(localPeer)
                elif peerType == "remote":
                    remotePeer = RemotePeer()
                    remotePeer.name = readString(entry, "name")
                    remotePeer.collectDir = readString(entry, "collect_dir")
                    remotePeer.remoteUser = readString(entry, "backup_user")
                    remotePeer.rcpCommand = readString(entry, "rcp_command")
                    remotePeer.rshCommand = readString(entry, "rsh_command")
                    remotePeer.cbackCommand = readString(entry, "cback_command")
                    remotePeer.ignoreFailureMode = readString(entry, "ignore_failures")
                    remotePeer.managed = readBoolean(entry, "managed")
                    managedActions = readString(entry, "managed_actions")
                    remotePeer.managedActions = parseCommaSeparatedString(managedActions)
                    remotePeers.append(remotePeer)
        if localPeers == []:
            localPeers = None
        if remotePeers == []:
            remotePeers = None
        return (localPeers, remotePeers)
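The dispatch-on-type logic above can be sketched standalone.  The helpers below (`_text`, `parse_peers`) are illustrative stand-ins, not Cedar Backup's actual `readString`/`readChildren` functions from its xmlutil module; the sketch only shows how each `<peer>` element is routed to a local or remote bucket based on its `<type>` child, with empty lists collapsed to `None`:

```python
# Standalone sketch of _parsePeerList's dispatch-on-type idea.
# All helper names here are hypothetical, not Cedar Backup's real API.
import xml.dom.minidom

def _text(parent, tag):
    # Stand-in for readString(): text of the first matching child, else None.
    nodes = parent.getElementsByTagName(tag)
    if not nodes or not nodes[0].firstChild:
        return None
    return nodes[0].firstChild.data.strip()

def parse_peers(xml_data):
    dom = xml.dom.minidom.parseString(xml_data)
    local, remote = [], []
    for entry in dom.getElementsByTagName("peer"):
        record = {"name": _text(entry, "name"),
                  "collect_dir": _text(entry, "collect_dir")}
        peer_type = _text(entry, "type")
        if peer_type == "local":
            local.append(record)
        elif peer_type == "remote":
            # Remote peers carry extra fields, e.g. the backup user.
            record["backup_user"] = _text(entry, "backup_user")
            remote.append(record)
    # Mirror the convention above: empty lists become None.
    return (local or None, remote or None)

xml_data = """<peers>
  <peer><type>local</type><name>machine1</name><collect_dir>/opt/backup</collect_dir></peer>
  <peer><type>remote</type><name>machine2</name><collect_dir>/opt/backup</collect_dir>
        <backup_user>backup</backup_user></peer>
</peers>"""
local, remote = parse_peers(xml_data)
```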

    @staticmethod
    def _parseDependencies(parentNode):
        """
        Reads extended action dependency information from a parent node.

        We read the following individual fields::

           runBefore      depends/run_before
           runAfter       depends/run_after

        Each of these fields is a comma-separated list of action names.

        The result is placed into an C{ActionDependencies} object.

        If the dependencies parent node does not exist, C{None} will be
        returned.  Otherwise, an C{ActionDependencies} object will always be
        created, even if it does not contain any actual dependencies.

        @param parentNode: Parent node to search beneath.

        @return: C{ActionDependencies} object or C{None}.
        @raise ValueError: If the data at the location can't be read.
        """
        sectionNode = readFirstChild(parentNode, "depends")
        if sectionNode is None:
            return None
        else:
            runBefore = readString(sectionNode, "run_before")
            runAfter = readString(sectionNode, "run_after")
            beforeList = parseCommaSeparatedString(runBefore)
            afterList = parseCommaSeparatedString(runAfter)
            return ActionDependencies(beforeList, afterList)

    @staticmethod
    def _parseBlankBehavior(parentNode):
        """
        Reads a single C{BlankBehavior} object from immediately beneath the parent.

        We read the following individual fields::

           blankMode      blank_behavior/mode
           blankFactor    blank_behavior/factor

        @param parentNode: Parent node to search beneath.

        @return: C{BlankBehavior} object, or C{None} if the section is not found.
        @raise ValueError: If some filled-in value is invalid.
        """
        blankBehavior = None
        sectionNode = readFirstChild(parentNode, "blank_behavior")
        if sectionNode is not None:
            blankBehavior = BlankBehavior()
            blankBehavior.blankMode = readString(sectionNode, "mode")
            blankBehavior.blankFactor = readString(sectionNode, "factor")
        return blankBehavior


    ########################################
    # High-level methods for generating XML
    ########################################

    def _extractXml(self):
        """
        Internal method to extract configuration into an XML string.

        This method assumes that the internal L{validate} method has been
        called prior to extracting the XML, if the caller cares.  No
        validation will be done internally.

        As a general rule, fields that are set to C{None} will be extracted
        into the document as empty tags.  The same goes for container tags
        that are filled based on lists - if the list is empty or C{None}, the
        container tag will be empty.
        """
        (xmlDom, parentNode) = createOutputDom()
        Config._addReference(xmlDom, parentNode, self.reference)
        Config._addExtensions(xmlDom, parentNode, self.extensions)
        Config._addOptions(xmlDom, parentNode, self.options)
        Config._addPeers(xmlDom, parentNode, self.peers)
        Config._addCollect(xmlDom, parentNode, self.collect)
        Config._addStage(xmlDom, parentNode, self.stage)
        Config._addStore(xmlDom, parentNode, self.store)
        Config._addPurge(xmlDom, parentNode, self.purge)
        xmlData = serializeDom(xmlDom)
        xmlDom.unlink()
        return xmlData
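The extract pattern above (build a DOM, append container and leaf nodes, serialize, unlink) can be sketched with plain `xml.dom.minidom`.  Note that `createOutputDom()`, `serializeDom()`, and `addStringNode()` are Cedar Backup helpers whose exact signatures are not shown here; this sketch substitutes minimal hypothetical equivalents:

```python
# Minimal sketch of the DOM-building pattern used by _extractXml and the
# _add* methods.  add_string_node is a hypothetical stand-in, not the real
# addStringNode helper.
import xml.dom.minidom

def add_string_node(dom, parent, name, value):
    # Per the rule above, a None value becomes an empty tag.
    node = dom.createElement(name)
    if value is not None:
        node.appendChild(dom.createTextNode(value))
    parent.appendChild(node)
    return node

impl = xml.dom.minidom.getDOMImplementation()
dom = impl.createDocument(None, "cb_config", None)
root = dom.documentElement
section = dom.createElement("reference")       # like _addReference's container
root.appendChild(section)
add_string_node(dom, section, "author", "pronovic")
add_string_node(dom, section, "revision", None)   # serialized as an empty tag
xml_data = dom.toxml()
dom.unlink()   # release the DOM, as _extractXml does after serializing
```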

    @staticmethod
    def _addReference(xmlDom, parentNode, referenceConfig):
        """
        Adds a <reference> configuration section as the next child of a parent.

        We add the following fields to the document::

           author         //cb_config/reference/author
           revision       //cb_config/reference/revision
           description    //cb_config/reference/description
           generator      //cb_config/reference/generator

        If C{referenceConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param referenceConfig: Reference configuration section to be added to the document.
        """
        if referenceConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "reference")
            addStringNode(xmlDom, sectionNode, "author", referenceConfig.author)
            addStringNode(xmlDom, sectionNode, "revision", referenceConfig.revision)
            addStringNode(xmlDom, sectionNode, "description", referenceConfig.description)
            addStringNode(xmlDom, sectionNode, "generator", referenceConfig.generator)

    @staticmethod
    def _addExtensions(xmlDom, parentNode, extensionsConfig):
        """
        Adds an <extensions> configuration section as the next child of a parent.

        We add the following fields to the document::

           order_mode     //cb_config/extensions/order_mode

        We also add groups of the following items, one list element per item::

           actions        //cb_config/extensions/action

        The extended action entries are added by L{_addExtendedAction}.

        If C{extensionsConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param extensionsConfig: Extensions configuration section to be added to the document.
        """
        if extensionsConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "extensions")
            addStringNode(xmlDom, sectionNode, "order_mode", extensionsConfig.orderMode)
            if extensionsConfig.actions is not None:
                for action in extensionsConfig.actions:
                    Config._addExtendedAction(xmlDom, sectionNode, action)

    @staticmethod
    def _addOptions(xmlDom, parentNode, optionsConfig):
        """
        Adds an <options> configuration section as the next child of a parent.

        We add the following fields to the document::

           startingDay       //cb_config/options/starting_day
           workingDir        //cb_config/options/working_dir
           backupUser        //cb_config/options/backup_user
           backupGroup       //cb_config/options/backup_group
           rcpCommand        //cb_config/options/rcp_command
           rshCommand        //cb_config/options/rsh_command
           cbackCommand      //cb_config/options/cback_command
           managedActions    //cb_config/options/managed_actions

        We also add groups of the following items, one list element per
        item::

           overrides         //cb_config/options/override
           hooks             //cb_config/options/pre_action_hook
           hooks             //cb_config/options/post_action_hook

        The individual override items are added by L{_addOverride}.  The
        individual hook items are added by L{_addHook}.

        If C{optionsConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param optionsConfig: Options configuration section to be added to the document.
        """
        if optionsConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "options")
            addStringNode(xmlDom, sectionNode, "starting_day", optionsConfig.startingDay)
            addStringNode(xmlDom, sectionNode, "working_dir", optionsConfig.workingDir)
            addStringNode(xmlDom, sectionNode, "backup_user", optionsConfig.backupUser)
            addStringNode(xmlDom, sectionNode, "backup_group", optionsConfig.backupGroup)
            addStringNode(xmlDom, sectionNode, "rcp_command", optionsConfig.rcpCommand)
            addStringNode(xmlDom, sectionNode, "rsh_command", optionsConfig.rshCommand)
            addStringNode(xmlDom, sectionNode, "cback_command", optionsConfig.cbackCommand)
            managedActions = Config._buildCommaSeparatedString(optionsConfig.managedActions)
            addStringNode(xmlDom, sectionNode, "managed_actions", managedActions)
            if optionsConfig.overrides is not None:
                for override in optionsConfig.overrides:
                    Config._addOverride(xmlDom, sectionNode, override)
            if optionsConfig.hooks is not None:
                for hook in optionsConfig.hooks:
                    Config._addHook(xmlDom, sectionNode, hook)

    @staticmethod
    def _addPeers(xmlDom, parentNode, peersConfig):
        """
        Adds a <peers> configuration section as the next child of a parent.

        We add groups of the following items, one list element per item::

           localPeers     //cb_config/peers/peer
           remotePeers    //cb_config/peers/peer

        The individual local and remote peer entries are added by
        L{_addLocalPeer} and L{_addRemotePeer}, respectively.

        If C{peersConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param peersConfig: Peers configuration section to be added to the document.
        """
        if peersConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "peers")
            if peersConfig.localPeers is not None:
                for localPeer in peersConfig.localPeers:
                    Config._addLocalPeer(xmlDom, sectionNode, localPeer)
            if peersConfig.remotePeers is not None:
                for remotePeer in peersConfig.remotePeers:
                    Config._addRemotePeer(xmlDom, sectionNode, remotePeer)

    @staticmethod
    def _addCollect(xmlDom, parentNode, collectConfig):
        """
        Adds a <collect> configuration section as the next child of a parent.

        We add the following fields to the document::

           targetDir               //cb_config/collect/collect_dir
           collectMode             //cb_config/collect/collect_mode
           archiveMode             //cb_config/collect/archive_mode
           ignoreFile              //cb_config/collect/ignore_file

        We also add groups of the following items, one list element per
        item::

           absoluteExcludePaths    //cb_config/collect/exclude/abs_path
           excludePatterns         //cb_config/collect/exclude/pattern
           collectFiles            //cb_config/collect/file
           collectDirs             //cb_config/collect/dir

        The individual collect files are added by L{_addCollectFile} and
        individual collect directories are added by L{_addCollectDir}.

        If C{collectConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param collectConfig: Collect configuration section to be added to the document.
        """
        if collectConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "collect")
            addStringNode(xmlDom, sectionNode, "collect_dir", collectConfig.targetDir)
            addStringNode(xmlDom, sectionNode, "collect_mode", collectConfig.collectMode)
            addStringNode(xmlDom, sectionNode, "archive_mode", collectConfig.archiveMode)
            addStringNode(xmlDom, sectionNode, "ignore_file", collectConfig.ignoreFile)
            if ((collectConfig.absoluteExcludePaths is not None and collectConfig.absoluteExcludePaths != []) or
                (collectConfig.excludePatterns is not None and collectConfig.excludePatterns != [])):
                excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
                if collectConfig.absoluteExcludePaths is not None:
                    for absolutePath in collectConfig.absoluteExcludePaths:
                        addStringNode(xmlDom, excludeNode, "abs_path", absolutePath)
                if collectConfig.excludePatterns is not None:
                    for pattern in collectConfig.excludePatterns:
                        addStringNode(xmlDom, excludeNode, "pattern", pattern)
            if collectConfig.collectFiles is not None:
                for collectFile in collectConfig.collectFiles:
                    Config._addCollectFile(xmlDom, sectionNode, collectFile)
            if collectConfig.collectDirs is not None:
                for collectDir in collectConfig.collectDirs:
                    Config._addCollectDir(xmlDom, sectionNode, collectDir)

    @staticmethod
    def _addStage(xmlDom, parentNode, stageConfig):
        """
        Adds a <stage> configuration section as the next child of a parent.

        We add the following fields to the document::

           targetDir      //cb_config/stage/staging_dir

        We also add groups of the following items, one list element per
        item::

           localPeers     //cb_config/stage/peer
           remotePeers    //cb_config/stage/peer

        The individual local and remote peer entries are added by
        L{_addLocalPeer} and L{_addRemotePeer}, respectively.

        If C{stageConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param stageConfig: Stage configuration section to be added to the document.
        """
        if stageConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "stage")
            addStringNode(xmlDom, sectionNode, "staging_dir", stageConfig.targetDir)
            if stageConfig.localPeers is not None:
                for localPeer in stageConfig.localPeers:
                    Config._addLocalPeer(xmlDom, sectionNode, localPeer)
            if stageConfig.remotePeers is not None:
                for remotePeer in stageConfig.remotePeers:
                    Config._addRemotePeer(xmlDom, sectionNode, remotePeer)

    @staticmethod
    def _addStore(xmlDom, parentNode, storeConfig):
        """
        Adds a <store> configuration section as the next child of a parent.

        We add the following fields to the document::

           sourceDir            //cb_config/store/source_dir
           mediaType            //cb_config/store/media_type
           deviceType           //cb_config/store/device_type
           devicePath           //cb_config/store/target_device
           deviceScsiId         //cb_config/store/target_scsi_id
           driveSpeed           //cb_config/store/drive_speed
           checkData            //cb_config/store/check_data
           checkMedia           //cb_config/store/check_media
           warnMidnite          //cb_config/store/warn_midnite
           noEject              //cb_config/store/no_eject
           refreshMediaDelay    //cb_config/store/refresh_media_delay
           ejectDelay           //cb_config/store/eject_delay

        Blanking behavior configuration is added by the L{_addBlankBehavior}
        method.

        If C{storeConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param storeConfig: Store configuration section to be added to the document.
        """
        if storeConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "store")
            addStringNode(xmlDom, sectionNode, "source_dir", storeConfig.sourceDir)
            addStringNode(xmlDom, sectionNode, "media_type", storeConfig.mediaType)
            addStringNode(xmlDom, sectionNode, "device_type", storeConfig.deviceType)
            addStringNode(xmlDom, sectionNode, "target_device", storeConfig.devicePath)
            addStringNode(xmlDom, sectionNode, "target_scsi_id", storeConfig.deviceScsiId)
            addIntegerNode(xmlDom, sectionNode, "drive_speed", storeConfig.driveSpeed)
            addBooleanNode(xmlDom, sectionNode, "check_data", storeConfig.checkData)
            addBooleanNode(xmlDom, sectionNode, "check_media", storeConfig.checkMedia)
            addBooleanNode(xmlDom, sectionNode, "warn_midnite", storeConfig.warnMidnite)
            addBooleanNode(xmlDom, sectionNode, "no_eject", storeConfig.noEject)
            addIntegerNode(xmlDom, sectionNode, "refresh_media_delay", storeConfig.refreshMediaDelay)
            addIntegerNode(xmlDom, sectionNode, "eject_delay", storeConfig.ejectDelay)
            Config._addBlankBehavior(xmlDom, sectionNode, storeConfig.blankBehavior)

    @staticmethod
    def _addPurge(xmlDom, parentNode, purgeConfig):
        """
        Adds a <purge> configuration section as the next child of a parent.

        We add the following fields to the document::

           purgeDirs      //cb_config/purge/dir

        The individual directory entries are added by L{_addPurgeDir}.

        If C{purgeConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param purgeConfig: Purge configuration section to be added to the document.
        """
        if purgeConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "purge")
            if purgeConfig.purgeDirs is not None:
                for purgeDir in purgeConfig.purgeDirs:
                    Config._addPurgeDir(xmlDom, sectionNode, purgeDir)

    @staticmethod
    def _addExtendedAction(xmlDom, parentNode, action):
        """
        Adds an extended action container as the next child of a parent.

        We add the following fields to the document::

           name            action/name
           module          action/module
           function        action/function
           index           action/index
           dependencies    action/depends

        Dependencies are added by the L{_addDependencies} method.

        The <action> node itself is created as the next child of the parent
        node.  This method only adds one action node.  The parent must loop
        for each action in the C{ExtensionsConfig} object.

        If C{action} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param action: Extended action to be added to the document.
        """
        if action is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "action")
            addStringNode(xmlDom, sectionNode, "name", action.name)
            addStringNode(xmlDom, sectionNode, "module", action.module)
            addStringNode(xmlDom, sectionNode, "function", action.function)
            addIntegerNode(xmlDom, sectionNode, "index", action.index)
            Config._addDependencies(xmlDom, sectionNode, action.dependencies)

    @staticmethod
    def _addOverride(xmlDom, parentNode, override):
        """
        Adds a command override container as the next child of a parent.

        We add the following fields to the document::

           command         override/command
           absolutePath    override/abs_path

        The <override> node itself is created as the next child of the parent
        node.  This method only adds one override node.  The parent must loop
        for each override in the C{OptionsConfig} object.

        If C{override} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param override: Command override to be added to the document.
        """
        if override is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "override")
            addStringNode(xmlDom, sectionNode, "command", override.command)
            addStringNode(xmlDom, sectionNode, "abs_path", override.absolutePath)

    @staticmethod
    def _addHook(xmlDom, parentNode, hook):
        """
        Adds an action hook container as the next child of a parent.

        The behavior varies depending on the value of the C{before} and
        C{after} flags on the hook.  If the C{before} flag is set, it's a
        pre-action hook, and we'll add the following fields::

           action         pre_action_hook/action
           command        pre_action_hook/command

        If the C{after} flag is set, it's a post-action hook, and we'll add
        the following fields::

           action         post_action_hook/action
           command        post_action_hook/command

        The <pre_action_hook> or <post_action_hook> node itself is created as
        the next child of the parent node.  This method only adds one hook
        node.  The parent must loop for each hook in the C{OptionsConfig}
        object.

        If C{hook} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param hook: Command hook to be added to the document.
        """
        if hook is not None:
            if hook.before:
                sectionNode = addContainerNode(xmlDom, parentNode, "pre_action_hook")
            else:
                sectionNode = addContainerNode(xmlDom, parentNode, "post_action_hook")
            addStringNode(xmlDom, sectionNode, "action", hook.action)
            addStringNode(xmlDom, sectionNode, "command", hook.command)

    @staticmethod
    def _addCollectFile(xmlDom, parentNode, collectFile):
        """
        Adds a collect file container as the next child of a parent.

        We add the following fields to the document::

           absolutePath    file/abs_path
           collectMode     file/collect_mode
           archiveMode     file/archive_mode

        Note that for consistency with collect directory handling we'll only
        emit the preferred C{collect_mode} tag.

        The <file> node itself is created as the next child of the parent
        node.  This method only adds one collect file node.  The parent must
        loop for each collect file in the C{CollectConfig} object.

        If C{collectFile} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param collectFile: Collect file to be added to the document.
        """
        if collectFile is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "file")
            addStringNode(xmlDom, sectionNode, "abs_path", collectFile.absolutePath)
            addStringNode(xmlDom, sectionNode, "collect_mode", collectFile.collectMode)
            addStringNode(xmlDom, sectionNode, "archive_mode", collectFile.archiveMode)

    @staticmethod
    def _addCollectDir(xmlDom, parentNode, collectDir):
        """
        Adds a collect directory container as the next child of a parent.

        We add the following fields to the document::

           absolutePath            dir/abs_path
           collectMode             dir/collect_mode
           archiveMode             dir/archive_mode
           ignoreFile              dir/ignore_file
           linkDepth               dir/link_depth
           dereference             dir/dereference
           recursionLevel          dir/recursion_level

        Note that an original XML document might have listed the collect mode
        using the C{mode} tag, since we accept both C{collect_mode} and
        C{mode}.  However, here we'll only emit the preferred C{collect_mode}
        tag.

        We also add groups of the following items, one list element per item::

           absoluteExcludePaths    dir/exclude/abs_path
           relativeExcludePaths    dir/exclude/rel_path
           excludePatterns         dir/exclude/pattern

        The <dir> node itself is created as the next child of the parent
        node.  This method only adds one collect directory node.  The parent
        must loop for each collect directory in the C{CollectConfig} object.

        If C{collectDir} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param collectDir: Collect directory to be added to the document.
        """
        if collectDir is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "dir")
            addStringNode(xmlDom, sectionNode, "abs_path", collectDir.absolutePath)
            addStringNode(xmlDom, sectionNode, "collect_mode", collectDir.collectMode)
            addStringNode(xmlDom, sectionNode, "archive_mode", collectDir.archiveMode)
            addStringNode(xmlDom, sectionNode, "ignore_file", collectDir.ignoreFile)
            addIntegerNode(xmlDom, sectionNode, "link_depth", collectDir.linkDepth)
            addBooleanNode(xmlDom, sectionNode, "dereference", collectDir.dereference)
            addIntegerNode(xmlDom, sectionNode, "recursion_level", collectDir.recursionLevel)
            if ((collectDir.absoluteExcludePaths is not None and collectDir.absoluteExcludePaths != []) or
                (collectDir.relativeExcludePaths is not None and collectDir.relativeExcludePaths != []) or
                (collectDir.excludePatterns is not None and collectDir.excludePatterns != [])):
                excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
                if collectDir.absoluteExcludePaths is not None:
                    for absolutePath in collectDir.absoluteExcludePaths:
                        addStringNode(xmlDom, excludeNode, "abs_path", absolutePath)
                if collectDir.relativeExcludePaths is not None:
                    for relativePath in collectDir.relativeExcludePaths:
                        addStringNode(xmlDom, excludeNode, "rel_path", relativePath)
                if collectDir.excludePatterns is not None:
                    for pattern in collectDir.excludePatterns:
                        addStringNode(xmlDom, excludeNode, "pattern", pattern)

    @staticmethod
    def _addLocalPeer(xmlDom, parentNode, localPeer):
        """
        Adds a local peer container as the next child of a parent.

        We add the following fields to the document::

           name                 peer/name
           collectDir           peer/collect_dir
           ignoreFailureMode    peer/ignore_failures

        Additionally, C{peer/type} is filled in with C{"local"}, since this
        is a local peer.

        The <peer> node itself is created as the next child of the parent
        node.  This method only adds one peer node.  The parent must loop for
        each peer in the C{StageConfig} object.

        If C{localPeer} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param localPeer: Local peer to be added to the document.
        """
        if localPeer is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "peer")
            addStringNode(xmlDom, sectionNode, "name", localPeer.name)
            addStringNode(xmlDom, sectionNode, "type", "local")
            addStringNode(xmlDom, sectionNode, "collect_dir", localPeer.collectDir)
            addStringNode(xmlDom, sectionNode, "ignore_failures", localPeer.ignoreFailureMode)

    @staticmethod
    def _addRemotePeer(xmlDom, parentNode, remotePeer):
        """
        Adds a remote peer container as the next child of a parent.

        We add the following fields to the document::

           name                 peer/name
           collectDir           peer/collect_dir
           remoteUser           peer/backup_user
           rcpCommand           peer/rcp_command
           rshCommand           peer/rsh_command
           cbackCommand         peer/cback_command
           ignoreFailureMode    peer/ignore_failures
           managed              peer/managed
           managedActions       peer/managed_actions

        Additionally, C{peer/type} is filled in with C{"remote"}, since this
        is a remote peer.

        The <peer> node itself is created as the next child of the parent
        node.  This method only adds one peer node.  The parent must loop for
        each peer in the C{StageConfig} object.

        If C{remotePeer} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param remotePeer: Remote peer to be added to the document.
        """
        if remotePeer is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "peer")
            addStringNode(xmlDom, sectionNode, "name", remotePeer.name)
            addStringNode(xmlDom, sectionNode, "type", "remote")
            addStringNode(xmlDom, sectionNode, "collect_dir", remotePeer.collectDir)
            addStringNode(xmlDom, sectionNode, "backup_user", remotePeer.remoteUser)
            addStringNode(xmlDom, sectionNode, "rcp_command", remotePeer.rcpCommand)
            addStringNode(xmlDom, sectionNode, "rsh_command", remotePeer.rshCommand)
            addStringNode(xmlDom, sectionNode, "cback_command", remotePeer.cbackCommand)
            addStringNode(xmlDom, sectionNode, "ignore_failures", remotePeer.ignoreFailureMode)
            addBooleanNode(xmlDom, sectionNode, "managed", remotePeer.managed)
            managedActions = Config._buildCommaSeparatedString(remotePeer.managedActions)
            addStringNode(xmlDom, sectionNode, "managed_actions", managedActions)

    @staticmethod
    def _addPurgeDir(xmlDom, parentNode, purgeDir):
        """
        Adds a purge directory container as the next child of a parent.

        We add the following fields to the document::

           absolutePath    dir/abs_path
           retainDays      dir/retain_days

        The <dir> node itself is created as the next child of the parent
        node.  This method only adds one purge directory node.  The parent
        must loop for each purge directory in the C{PurgeConfig} object.

        If C{purgeDir} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param purgeDir: Purge directory to be added to the document.
        """
        if purgeDir is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "dir")
            addStringNode(xmlDom, sectionNode, "abs_path", purgeDir.absolutePath)
            addIntegerNode(xmlDom, sectionNode, "retain_days", purgeDir.retainDays)

    @staticmethod
    def _addDependencies(xmlDom, parentNode, dependencies):
        """
        Adds extended action dependencies to a parent node.

        We add the following fields to the document::

           runBefore      depends/run_before
           runAfter       depends/run_after

        If C{dependencies} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param dependencies: C{ActionDependencies} object to be added to the document.
        """
        if dependencies is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "depends")
            runBefore = Config._buildCommaSeparatedString(dependencies.beforeList)
            runAfter = Config._buildCommaSeparatedString(dependencies.afterList)
            addStringNode(xmlDom, sectionNode, "run_before", runBefore)
            addStringNode(xmlDom, sectionNode, "run_after", runAfter)
    5631 5632 @staticmethod
    5633 - def _buildCommaSeparatedString(valueList):
    5634 """ 5635 Creates a comma-separated string from a list of values. 5636 5637 As a special case, if C{valueList} is C{None}, then C{None} will be 5638 returned. 5639 5640 @param valueList: List of values to be placed into a string 5641 5642 @return: Values from valueList as a comma-separated string. 5643 """ 5644 if valueList is None: 5645 return None 5646 return ",".join(valueList)
    5647 5648 @staticmethod
    5649 - def _addBlankBehavior(xmlDom, parentNode, blankBehavior):
    5650 """ 5651 Adds a blanking behavior container as the next child of a parent. 5652 5653 We add the following fields to the document:: 5654 5655 blankMode blank_behavior/mode 5656 blankFactor blank_behavior/factor 5657 5658 The <blank_behavior> node itself is created as the next child of the 5659 parent node. 5660 5661 If C{blankBehavior} is C{None}, this method call will be a no-op. 5662 5663 @param xmlDom: DOM tree as from L{createOutputDom}. 5664 @param parentNode: Parent that the section should be appended to. 5665 @param blankBehavior: Blanking behavior to be added to the document. 5666 """ 5667 if blankBehavior is not None: 5668 sectionNode = addContainerNode(xmlDom, parentNode, "blank_behavior") 5669 addStringNode(xmlDom, sectionNode, "mode", blankBehavior.blankMode) 5670 addStringNode(xmlDom, sectionNode, "factor", blankBehavior.blankFactor)
    5671 5672 5673 ################################################# 5674 # High-level methods used for validating content 5675 ################################################# 5676
    5677 - def _validateContents(self):
    5678 """ 5679 Validates configuration contents per rules discussed in module 5680 documentation. 5681 5682 This is the second pass at validation. It ensures that any filled-in 5683 section contains valid data. Any section which is not set to C{None} is 5684 validated per the rules for that section, laid out in the module 5685 documentation (above). 5686 5687 @raise ValueError: If configuration is invalid. 5688 """ 5689 self._validateReference() 5690 self._validateExtensions() 5691 self._validateOptions() 5692 self._validatePeers() 5693 self._validateCollect() 5694 self._validateStage() 5695 self._validateStore() 5696 self._validatePurge()
    5697
    5698 - def _validateReference(self):
    5699 """ 5700 Validates reference configuration. 5701 There are currently no reference-related validations. 5702 @raise ValueError: If reference configuration is invalid. 5703 """ 5704 pass
    5705
    5706 - def _validateExtensions(self):
    5707 """ 5708 Validates extensions configuration. 5709 5710 The list of actions may be either C{None} or an empty list C{[]} if 5711 desired. Each extended action must include a name, a module, and a 5712 function. 5713 5714 Then, if the order mode is None or "index", an index is required; and if 5715 the order mode is "dependency", dependency information is required. 5716 5717 @raise ValueError: If extensions configuration is invalid. 5718 """ 5719 if self.extensions is not None: 5720 if self.extensions.actions is not None: 5721 names = [] 5722 for action in self.extensions.actions: 5723 if action.name is None: 5724 raise ValueError("Each extended action must set a name.") 5725 names.append(action.name) 5726 if action.module is None: 5727 raise ValueError("Each extended action must set a module.") 5728 if action.function is None: 5729 raise ValueError("Each extended action must set a function.") 5730 if self.extensions.orderMode is None or self.extensions.orderMode == "index": 5731 if action.index is None: 5732 raise ValueError("Each extended action must set an index, based on order mode.") 5733 elif self.extensions.orderMode == "dependency": 5734 if action.dependencies is None: 5735 raise ValueError("Each extended action must set dependency information, based on order mode.") 5736 checkUnique("Duplicate extension names exist:", names)
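The order-mode rule above (index mode requires an index, dependency mode requires dependency information) can be restated as a small standalone check. This is an illustrative sketch only; `check_action` and its duck-typed `action` argument are hypothetical stand-ins, not part of Cedar Backup:

```python
def check_action(action, order_mode):
    # "action" is any object carrying name/module/function plus
    # index/dependencies attributes (a stand-in for ExtendedAction).
    for field in ("name", "module", "function"):
        if getattr(action, field) is None:
            raise ValueError("Each extended action must set a %s." % field)
    # Which additional field is mandatory depends on the order mode.
    if order_mode in (None, "index") and action.index is None:
        raise ValueError("Each extended action must set an index, based on order mode.")
    if order_mode == "dependency" and action.dependencies is None:
        raise ValueError("Each extended action must set dependency information, based on order mode.")
```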
    5737
    5738 - def _validateOptions(self):
    5739 """ 5740 Validates options configuration. 5741 5742 All fields must be filled in except the rsh command. The rcp and rsh 5743 commands are used as default values for all remote peers. Remote peers 5744 can also rely on the backup user as the default remote user name if they 5745 choose. 5746 5747 @raise ValueError: If options configuration is invalid. 5748 """ 5749 if self.options is not None: 5750 if self.options.startingDay is None: 5751 raise ValueError("Options section starting day must be filled in.") 5752 if self.options.workingDir is None: 5753 raise ValueError("Options section working directory must be filled in.") 5754 if self.options.backupUser is None: 5755 raise ValueError("Options section backup user must be filled in.") 5756 if self.options.backupGroup is None: 5757 raise ValueError("Options section backup group must be filled in.") 5758 if self.options.rcpCommand is None: 5759 raise ValueError("Options section remote copy command must be filled in.")
    5760
    5761 - def _validatePeers(self):
    5762 """ 5763 Validates peers configuration per rules in L{_validatePeerList}. 5764 @raise ValueError: If peers configuration is invalid. 5765 """ 5766 if self.peers is not None: 5767 self._validatePeerList(self.peers.localPeers, self.peers.remotePeers)
    5768
    5769 - def _validateCollect(self):
    5770 """ 5771 Validates collect configuration. 5772 5773 The target directory must be filled in. The collect mode, archive mode, 5774 ignore file, and recursion level are all optional. The list of absolute 5775 paths to exclude and patterns to exclude may be either C{None} or an 5776 empty list C{[]} if desired. 5777 5778 Each collect directory entry must contain an absolute path to collect, 5779 and then must either be able to take collect mode, archive mode and 5780 ignore file configuration from the parent C{CollectConfig} object, or 5781 must set each value on its own. The list of absolute paths to exclude, 5782 relative paths to exclude and patterns to exclude may be either C{None} 5783 or an empty list C{[]} if desired. Any list of absolute paths to exclude 5784 or patterns to exclude will be combined with the same list in the 5785 C{CollectConfig} object to make the complete list for a given directory. 5786 5787 @raise ValueError: If collect configuration is invalid. 5788 """ 5789 if self.collect is not None: 5790 if self.collect.targetDir is None: 5791 raise ValueError("Collect section target directory must be filled in.") 5792 if self.collect.collectFiles is not None: 5793 for collectFile in self.collect.collectFiles: 5794 if collectFile.absolutePath is None: 5795 raise ValueError("Each collect file must set an absolute path.") 5796 if self.collect.collectMode is None and collectFile.collectMode is None: 5797 raise ValueError("Collect mode must either be set in parent collect section or individual collect file.") 5798 if self.collect.archiveMode is None and collectFile.archiveMode is None: 5799 raise ValueError("Archive mode must either be set in parent collect section or individual collect file.") 5800 if self.collect.collectDirs is not None: 5801 for collectDir in self.collect.collectDirs: 5802 if collectDir.absolutePath is None: 5803 raise ValueError("Each collect directory must set an absolute path.") 5804 if self.collect.collectMode is None and collectDir.collectMode is None: 5805 raise ValueError("Collect mode must either be set in parent collect section or individual collect directory.") 5806 if self.collect.archiveMode is None and collectDir.archiveMode is None: 5807 raise ValueError("Archive mode must either be set in parent collect section or individual collect directory.") 5808 if self.collect.ignoreFile is None and collectDir.ignoreFile is None: 5809 raise ValueError("Ignore file must either be set in parent collect section or individual collect directory.") 5810 if (collectDir.linkDepth is None or collectDir.linkDepth < 1) and collectDir.dereference: 5811 raise ValueError("Dereference flag is only valid when a non-zero link depth is in use.")
    5812
    5813 - def _validateStage(self):
    5814 """ 5815 Validates stage configuration. 5816 5817 The target directory must be filled in, and the peers are 5818 also validated. 5819 5820 Peers are only required in this section if the peers configuration 5821 section is not filled in. However, if any peers are filled in 5822 here, they override the peers configuration and must meet the 5823 validation criteria in L{_validatePeerList}. 5824 5825 @raise ValueError: If stage configuration is invalid. 5826 """ 5827 if self.stage is not None: 5828 if self.stage.targetDir is None: 5829 raise ValueError("Stage section target directory must be filled in.") 5830 if self.peers is None: 5831 # In this case, stage configuration is our only configuration and must be valid. 5832 self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) 5833 else: 5834 # In this case, peers configuration is the default and stage configuration overrides. 5835 # Validation is only needed if stage configuration is actually filled in. 5836 if self.stage.hasPeers(): 5837 self._validatePeerList(self.stage.localPeers, self.stage.remotePeers)
    5838
    5839 - def _validateStore(self):
    5840 """ 5841 Validates store configuration. 5842 5843 The device type, drive speed, and blanking behavior are optional. All 5844 other values are required. Missing booleans will be set to defaults. 5845 5846 If blanking behavior is provided, then both a blanking mode and a 5847 blanking factor are required. 5848 5849 The image writer functionality in the C{writer} module is supposed to be 5850 able to handle a device speed of C{None}. 5851 5852 Any caller which needs a "real" (non-C{None}) value for the device type 5853 can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. 5854 5855 This is also where we make sure that the media type -- which is already a 5856 valid type -- matches up properly with the device type. 5857 5858 @raise ValueError: If store configuration is invalid. 5859 """ 5860 if self.store is not None: 5861 if self.store.sourceDir is None: 5862 raise ValueError("Store section source directory must be filled in.") 5863 if self.store.mediaType is None: 5864 raise ValueError("Store section media type must be filled in.") 5865 if self.store.devicePath is None: 5866 raise ValueError("Store section device path must be filled in.") 5867 if self.store.deviceType is None or self.store.deviceType == "cdwriter": 5868 if self.store.mediaType not in VALID_CD_MEDIA_TYPES: 5869 raise ValueError("Media type must match device type.") 5870 elif self.store.deviceType == "dvdwriter": 5871 if self.store.mediaType not in VALID_DVD_MEDIA_TYPES: 5872 raise ValueError("Media type must match device type.") 5873 if self.store.blankBehavior is not None: 5874 if self.store.blankBehavior.blankMode is None or self.store.blankBehavior.blankFactor is None: 5875 raise ValueError("If blanking behavior is provided, all values must be filled in.")
    5876
    5877 - def _validatePurge(self):
    5878 """ 5879 Validates purge configuration. 5880 5881 The list of purge directories may be either C{None} or an empty list 5882 C{[]} if desired. All purge directories must contain a path and a retain 5883 days value. 5884 5885 @raise ValueError: If purge configuration is invalid. 5886 """ 5887 if self.purge is not None: 5888 if self.purge.purgeDirs is not None: 5889 for purgeDir in self.purge.purgeDirs: 5890 if purgeDir.absolutePath is None: 5891 raise ValueError("Each purge directory must set an absolute path.") 5892 if purgeDir.retainDays is None: 5893 raise ValueError("Each purge directory must set a retain days value.")
    5894
    5895 - def _validatePeerList(self, localPeers, remotePeers):
    5896 """ 5897 Validates the set of local and remote peers. 5898 5899 Local peers must be completely filled in, including both name and collect 5900 directory. Remote peers must also fill in the name and collect 5901 directory, but can leave the remote user and rcp command unset. In this 5902 case, the remote user is assumed to match the backup user from the 5903 options section and rcp command is taken directly from the options 5904 section. 5905 5906 @param localPeers: List of local peers 5907 @param remotePeers: List of remote peers 5908 5909 @raise ValueError: If the peer configuration is invalid. 5910 """ 5911 if localPeers is None and remotePeers is None: 5912 raise ValueError("Peer list must contain at least one backup peer.") 5913 if localPeers is None and remotePeers is not None: 5914 if len(remotePeers) < 1: 5915 raise ValueError("Peer list must contain at least one backup peer.") 5916 elif localPeers is not None and remotePeers is None: 5917 if len(localPeers) < 1: 5918 raise ValueError("Peer list must contain at least one backup peer.") 5919 elif localPeers is not None and remotePeers is not None: 5920 if len(localPeers) + len(remotePeers) < 1: 5921 raise ValueError("Peer list must contain at least one backup peer.") 5922 names = [] 5923 if localPeers is not None: 5924 for localPeer in localPeers: 5925 if localPeer.name is None: 5926 raise ValueError("Local peers must set a name.") 5927 names.append(localPeer.name) 5928 if localPeer.collectDir is None: 5929 raise ValueError("Local peers must set a collect directory.") 5930 if remotePeers is not None: 5931 for remotePeer in remotePeers: 5932 if remotePeer.name is None: 5933 raise ValueError("Remote peers must set a name.") 5934 names.append(remotePeer.name) 5935 if remotePeer.collectDir is None: 5936 raise ValueError("Remote peers must set a collect directory.") 5937 if (self.options is None or self.options.backupUser is None) and remotePeer.remoteUser is None: 5938 raise ValueError("Remote user must either be set in options section or individual remote peer.") 5939 if (self.options is None or self.options.rcpCommand is None) and remotePeer.rcpCommand is None: 5940 raise ValueError("Remote copy command must either be set in options section or individual remote peer.") 5941 if remotePeer.managed: 5942 if (self.options is None or self.options.rshCommand is None) and remotePeer.rshCommand is None: 5943 raise ValueError("Remote shell command must either be set in options section or individual remote peer.") 5944 if (self.options is None or self.options.cbackCommand is None) and remotePeer.cbackCommand is None: 5945 raise ValueError("Remote cback command must either be set in options section or individual remote peer.") 5946 if ((self.options is None or self.options.managedActions is None or len(self.options.managedActions) < 1) 5947 and (remotePeer.managedActions is None or len(remotePeer.managedActions) < 1)): 5948 raise ValueError("Managed actions list must be set in options section or individual remote peer.") 5949 checkUnique("Duplicate peer names exist:", names)
    5950
    5951 5952 ######################################################################## 5953 # General utility functions 5954 ######################################################################## 5955 5956 -def readByteQuantity(parent, name):
    5957 """ 5958 Read a byte size value from an XML document. 5959 5960 A byte size value is an interpreted string value. If the string value 5961 ends with "KB", "MB" or "GB", then the string before that is interpreted 5962 as kilobytes, megabytes or gigabytes. Otherwise, it is interpreted as bytes. 5963 5964 @param parent: Parent node to search beneath. 5965 @param name: Name of node to search for. 5966 5967 @return: ByteQuantity parsed from XML document 5968 """ 5969 data = readString(parent, name) 5970 if data is None: 5971 return None 5972 data = data.strip() 5973 if data.endswith("KB"): 5974 quantity = data[0:data.rfind("KB")].strip() 5975 units = UNIT_KBYTES 5976 elif data.endswith("MB"): 5977 quantity = data[0:data.rfind("MB")].strip() 5978 units = UNIT_MBYTES 5979 elif data.endswith("GB"): 5980 quantity = data[0:data.rfind("GB")].strip() 5981 units = UNIT_GBYTES 5982 else: 5983 quantity = data.strip() 5984 units = UNIT_BYTES 5985 return ByteQuantity(quantity, units)
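The suffix handling above amounts to a small string transformation that is easy to exercise on its own. A minimal standalone sketch follows; the function name and the plain-string unit markers are illustrative, not part of the module:

```python
def parse_byte_quantity(data):
    # Peel a known unit suffix off the string; anything without a
    # recognized suffix is interpreted as a plain byte count.
    data = data.strip()
    for suffix in ("KB", "MB", "GB"):
        if data.endswith(suffix):
            return data[:-len(suffix)].strip(), suffix
    return data, "B"
```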
    5986
    5987 -def addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity):
    5988 """ 5989 Adds a text node as the next child of a parent, to contain a byte size. 5990 5991 If the C{byteQuantity} is None, then the node will be created, but will 5992 be empty (i.e. will contain no text node child). 5993 5994 The value is rendered using the units stored in the C{ByteQuantity}: 5995 kilobytes as "1.0 KB", megabytes as "1.0 MB", gigabytes as "1.0 GB", 5996 and plain bytes as a bare number ("423413"). 5997 5998 @param xmlDom: DOM tree as from C{impl.createDocument()}. 5999 @param parentNode: Parent node to create child for. 6000 @param nodeName: Name of the new container node. 6001 @param byteQuantity: ByteQuantity object to put into the XML document 6002 6003 @return: Reference to the newly-created node. 6004 """ 6005 if byteQuantity is None: 6006 byteString = None 6007 elif byteQuantity.units == UNIT_KBYTES: 6008 byteString = "%s KB" % byteQuantity.quantity 6009 elif byteQuantity.units == UNIT_MBYTES: 6010 byteString = "%s MB" % byteQuantity.quantity 6011 elif byteQuantity.units == UNIT_GBYTES: 6012 byteString = "%s GB" % byteQuantity.quantity 6013 else: 6014 byteString = byteQuantity.quantity 6015 return addStringNode(xmlDom, parentNode, nodeName, byteString)
    6016

CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mbox-pysrc.html
CedarBackup2.extend.mbox

    Source Code for Module CedarBackup2.extend.mbox

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2006-2007,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python (>= 2.5) 
      29  # Project  : Official Cedar Backup Extensions 
      30  # Revision : $Id: mbox.py 1006 2010-07-07 21:03:57Z pronovic $ 
      31  # Purpose  : Provides an extension to back up mbox email files. 
      32  # 
      33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      34   
      35  ######################################################################## 
      36  # Module documentation 
      37  ######################################################################## 
      38   
      39  """ 
      40  Provides an extension to back up mbox email files. 
      41   
      42  Backing up email 
      43  ================ 
      44   
      45     Email folders (often stored as mbox flatfiles) are not well-suited to being backed 
      46     up with an incremental backup like the one offered by Cedar Backup.  This is 
      47     because mbox files often change on a daily basis, forcing the incremental 
      48     backup process to back them up every day in order to avoid losing data.  This 
      49     can result in quite a bit of wasted space when backing up large folders.  (Note 
      50     that the alternative maildir format does not share this problem, since it 
      51     typically uses one file per message.) 
      52   
      53     One solution to this problem is to design a smarter incremental backup process, 
      54     which backs up baseline content on the first day of the week, and then backs up 
      55     only new messages added to that folder on every other day of the week.  This way, 
      56     the backup for any single day is only as large as the messages placed into the  
      57     folder on that day.  The backup isn't as "perfect" as the incremental backup 
      58     process, because it doesn't preserve information about messages deleted from 
      59     the backed-up folder.  However, it should be much more space-efficient, and 
      60     in a recovery situation, it seems better to restore too much data rather 
      61     than too little. 
      62   
      63  What is this extension? 
      64  ======================= 
      65   
      66     This is a Cedar Backup extension used to back up mbox email files via the Cedar 
      67     Backup command line.  Individual mbox files or directories containing mbox 
      68     files can be backed up using the same collect modes allowed for filesystems in 
      69     the standard Cedar Backup collect action: weekly, daily, incremental.  It  
      70     implements the "smart" incremental backup process discussed above, using  
      71     functionality provided by the C{grepmail} utility. 
      72   
      73     This extension requires a new configuration section <mbox> and is intended to 
      74     be run either immediately before or immediately after the standard collect 
      75     action.  Aside from its own configuration, it requires the options and collect 
      76     configuration sections in the standard Cedar Backup configuration file. 
      77   
      78     The mbox action is conceptually similar to the standard collect action, 
      79     except that mbox directories are not collected recursively.  This implies 
      80     some configuration changes (i.e. there's no need for global exclusions or an 
      81     ignore file).  If you back up a directory, all of the mbox files in that 
      82     directory are backed up into a single tar file using the indicated 
      83     compression method. 
      84   
      85  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      86  """ 
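The weekly-baseline-plus-daily-new-messages scheme described above can be sketched as a command builder. This is illustrative only, not the extension's actual implementation: grepmail's `-d` date-range option is real, but treat the exact `since ...` spec string as an assumption, and `mbox_backup_command` is a hypothetical helper:

```python
import datetime

def mbox_backup_command(mbox_path, last_run, start_of_week):
    # On the first day of the week (or on the very first run) take the
    # whole file as a baseline; on other days ask grepmail for only the
    # messages that arrived since the previous run.
    if start_of_week or last_run is None:
        return ["cat", mbox_path]          # full baseline copy
    since = last_run.strftime("%d %b %Y")  # assumed grepmail date spec
    return ["grepmail", "-a", "-d", "since %s" % since, mbox_path]
```

The command list would then be handed to something like C{executeCommand}; only the baseline-versus-incremental decision is the point here.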
      87   
      88  ######################################################################## 
      89  # Imported modules 
      90  ######################################################################## 
      91   
      92  # System modules 
      93  import os 
      94  import logging 
      95  import datetime 
      96  import pickle 
      97  import tempfile 
      98  from bz2 import BZ2File 
      99  from gzip import GzipFile 
     100   
     101  # Cedar Backup modules 
     102  from CedarBackup2.filesystem import FilesystemList, BackupFileList 
     103  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode 
     104  from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList 
     105  from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES 
     106  from CedarBackup2.util import isStartOfWeek, buildNormalizedPath 
     107  from CedarBackup2.util import resolveCommand, executeCommand 
     108  from CedarBackup2.util import ObjectTypeList, UnorderedList, RegexList, encodePath, changeOwnership 
     109   
     110   
     111  ######################################################################## 
     112  # Module-wide constants and variables 
     113  ######################################################################## 
     114   
     115  logger = logging.getLogger("CedarBackup2.log.extend.mbox") 
     116   
     117  GREPMAIL_COMMAND = [ "grepmail", ] 
     118  REVISION_PATH_EXTENSION = "mboxlast" 
    
    119 120 121 ######################################################################## 122 # MboxFile class definition 123 ######################################################################## 124 125 -class MboxFile(object):
    126 127 """ 128 Class representing mbox file configuration. 129 130 The following restrictions exist on data in this class: 131 132 - The absolute path must be absolute. 133 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 134 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 135 136 @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, compressMode 137 """ 138
    139 - def __init__(self, absolutePath=None, collectMode=None, compressMode=None):
    140 """ 141 Constructor for the C{MboxFile} class. 142 143 You should never directly instantiate this class. 144 145 @param absolutePath: Absolute path to an mbox file on disk. 146 @param collectMode: Overridden collect mode for this directory. 147 @param compressMode: Overridden compression mode for this directory. 148 """ 149 self._absolutePath = None 150 self._collectMode = None 151 self._compressMode = None 152 self.absolutePath = absolutePath 153 self.collectMode = collectMode 154 self.compressMode = compressMode
    155
    156 - def __repr__(self):
    157 """ 158 Official string representation for class instance. 159 """ 160 return "MboxFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode)
    161
    162 - def __str__(self):
    163 """ 164 Informal string representation for class instance. 165 """ 166 return self.__repr__()
    167
    168 - def __cmp__(self, other):
    169 """ 170 Definition of equals operator for this class. 171 @param other: Other object to compare to. 172 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 173 """ 174 if other is None: 175 return 1 176 if self.absolutePath != other.absolutePath: 177 if self.absolutePath < other.absolutePath: 178 return -1 179 else: 180 return 1 181 if self.collectMode != other.collectMode: 182 if self.collectMode < other.collectMode: 183 return -1 184 else: 185 return 1 186 if self.compressMode != other.compressMode: 187 if self.compressMode < other.compressMode: 188 return -1 189 else: 190 return 1 191 return 0
    192
    193 - def _setAbsolutePath(self, value):
    194 """ 195 Property target used to set the absolute path. 196 The value must be an absolute path if it is not C{None}. 197 It does not have to exist on disk at the time of assignment. 198 @raise ValueError: If the value is not an absolute path. 199 @raise ValueError: If the value cannot be encoded properly. 200 """ 201 if value is not None: 202 if not os.path.isabs(value): 203 raise ValueError("Absolute path must be, er, an absolute path.") 204 self._absolutePath = encodePath(value)
    205
    206 - def _getAbsolutePath(self):
    207 """ 208 Property target used to get the absolute path. 209 """ 210 return self._absolutePath
    211
    212 - def _setCollectMode(self, value):
    213 """ 214 Property target used to set the collect mode. 215 If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 216 @raise ValueError: If the value is not valid. 217 """ 218 if value is not None: 219 if value not in VALID_COLLECT_MODES: 220 raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 221 self._collectMode = value
    222
    223 - def _getCollectMode(self):
    224 """ 225 Property target used to get the collect mode. 226 """ 227 return self._collectMode
    228
    229 - def _setCompressMode(self, value):
    230 """ 231 Property target used to set the compress mode. 232 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 233 @raise ValueError: If the value is not valid. 234 """ 235 if value is not None: 236 if value not in VALID_COMPRESS_MODES: 237 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 238 self._compressMode = value
    239
    240 - def _getCompressMode(self):
    241 """ 242 Property target used to get the compress mode. 243 """ 244 return self._compressMode
    245 246 absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox file.") 247 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox file.") 248 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox file.")
    249
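The getter/setter pairs above follow the classic pre-Python-2.6 C{property()} idiom: all validation lives in the setter, and the constructor assigns through the public property, so an invalid value can never be stored. A minimal standalone sketch of the same pattern (the `ValidatedPath` class is hypothetical, not part of this module):

```python
import os

class ValidatedPath(object):
    def __init__(self, absolutePath=None):
        self._absolutePath = None
        self.absolutePath = absolutePath   # routed through the setter below

    def _setAbsolutePath(self, value):
        # Reject relative paths, as MboxFile's setter does.
        if value is not None and not os.path.isabs(value):
            raise ValueError("Absolute path must be, er, an absolute path.")
        self._absolutePath = value

    def _getAbsolutePath(self):
        return self._absolutePath

    absolutePath = property(_getAbsolutePath, _setAbsolutePath, None,
                            doc="Absolute path to the mbox file.")
```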
    250 251 ######################################################################## 252 # MboxDir class definition 253 ######################################################################## 254 255 -class MboxDir(object):
    256 257 """ 258 Class representing mbox directory configuration. 259 260 The following restrictions exist on data in this class: 261 262 - The absolute path must be absolute. 263 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 264 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 265 266 Unlike collect directory configuration, this is the only place exclusions 267 are allowed (no global exclusions at the <mbox> configuration level). Also, 268 we only allow relative exclusions and there is no configured ignore file. 269 This is because mbox directory backups are not recursive. 270 271 @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, 272 compressMode, relativeExcludePaths, excludePatterns 273 """ 274
    275 - def __init__(self, absolutePath=None, collectMode=None, compressMode=None, 276 relativeExcludePaths=None, excludePatterns=None):
    277 """ 278 Constructor for the C{MboxDir} class. 279 280 You should never directly instantiate this class. 281 282 @param absolutePath: Absolute path to a mbox file on disk. 283 @param collectMode: Overridden collect mode for this directory. 284 @param compressMode: Overridden compression mode for this directory. 285 @param relativeExcludePaths: List of relative paths to exclude. 286 @param excludePatterns: List of regular expression patterns to exclude 287 """ 288 self._absolutePath = None 289 self._collectMode = None 290 self._compressMode = None 291 self._relativeExcludePaths = None 292 self._excludePatterns = None 293 self.absolutePath = absolutePath 294 self.collectMode = collectMode 295 self.compressMode = compressMode 296 self.relativeExcludePaths = relativeExcludePaths 297 self.excludePatterns = excludePatterns
    298
    299 - def __repr__(self):
    300 """ 301 Official string representation for class instance. 302 """ 303 return "MboxDir(%s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode, 304 self.relativeExcludePaths, self.excludePatterns)
    305
    306 - def __str__(self):
    307 """ 308 Informal string representation for class instance. 309 """ 310 return self.__repr__()
    311
    312 - def __cmp__(self, other):
    313 """ 314 Definition of equals operator for this class. 315 @param other: Other object to compare to. 316 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 317 """ 318 if other is None: 319 return 1 320 if self.absolutePath != other.absolutePath: 321 if self.absolutePath < other.absolutePath: 322 return -1 323 else: 324 return 1 325 if self.collectMode != other.collectMode: 326 if self.collectMode < other.collectMode: 327 return -1 328 else: 329 return 1 330 if self.compressMode != other.compressMode: 331 if self.compressMode < other.compressMode: 332 return -1 333 else: 334 return 1 335 if self.relativeExcludePaths != other.relativeExcludePaths: 336 if self.relativeExcludePaths < other.relativeExcludePaths: 337 return -1 338 else: 339 return 1 340 if self.excludePatterns != other.excludePatterns: 341 if self.excludePatterns < other.excludePatterns: 342 return -1 343 else: 344 return 1 345 return 0

   def _setAbsolutePath(self, value):
      """
      Property target used to set the absolute path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Absolute path must be, er, an absolute path.")
      self._absolutePath = encodePath(value)

   def _getAbsolutePath(self):
      """
      Property target used to get the absolute path.
      """
      return self._absolutePath

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setCompressMode(self, value):
      """
      Property target used to set the compress mode.
      If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COMPRESS_MODES:
            raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
      self._compressMode = value

   def _getCompressMode(self):
      """
      Property target used to get the compress mode.
      """
      return self._compressMode

   def _setRelativeExcludePaths(self, value):
      """
      Property target used to set the relative exclude paths list.
      Elements do not have to exist on disk at the time of assignment.
      """
      if value is None:
         self._relativeExcludePaths = None
      else:
         try:
            saved = self._relativeExcludePaths
            self._relativeExcludePaths = UnorderedList()
            self._relativeExcludePaths.extend(value)
         except Exception, e:
            self._relativeExcludePaths = saved
            raise e

   def _getRelativeExcludePaths(self):
      """
      Property target used to get the relative exclude paths list.
      """
      return self._relativeExcludePaths

   def _setExcludePatterns(self, value):
      """
      Property target used to set the exclude patterns list.
      """
      if value is None:
         self._excludePatterns = None
      else:
         try:
            saved = self._excludePatterns
            self._excludePatterns = RegexList()
            self._excludePatterns.extend(value)
         except Exception, e:
            self._excludePatterns = saved
            raise e

   def _getExcludePatterns(self):
      """
      Property target used to get the exclude patterns list.
      """
      return self._excludePatterns

   absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox directory.")
   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox directory.")
   compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox directory.")
   relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.")
   excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")


########################################################################
# MboxConfig class definition
########################################################################

class MboxConfig(object):

   """
   Class representing mbox configuration.

   Mbox configuration is used for backing up mbox email files.

   The following restrictions exist on data in this class:

      - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
      - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.
      - The C{mboxFiles} list must be a list of C{MboxFile} objects.
      - The C{mboxDirs} list must be a list of C{MboxDir} objects.

   For the C{mboxFiles} and C{mboxDirs} lists, validation is accomplished
   through the L{util.ObjectTypeList} list implementation that overrides common
   list methods and transparently ensures that each element is of the proper
   type.

   Unlike collect configuration, no global exclusions are allowed on this
   level.  We only allow relative exclusions at the mbox directory level.
   Also, there is no configured ignore file.  This is because mbox directory
   backups are not recursive.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, mboxFiles, mboxDirs
   """

   def __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None):
      """
      Constructor for the C{MboxConfig} class.

      @param collectMode: Default collect mode.
      @param compressMode: Default compress mode.
      @param mboxFiles: List of mbox files to back up.
      @param mboxDirs: List of mbox directories to back up.

      @raise ValueError: If one of the values is invalid.
      """
      self._collectMode = None
      self._compressMode = None
      self._mboxFiles = None
      self._mboxDirs = None
      self.collectMode = collectMode
      self.compressMode = compressMode
      self.mboxFiles = mboxFiles
      self.mboxDirs = mboxDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "MboxConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.mboxFiles, self.mboxDirs)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.compressMode != other.compressMode:
         if self.compressMode < other.compressMode:
            return -1
         else:
            return 1
      if self.mboxFiles != other.mboxFiles:
         if self.mboxFiles < other.mboxFiles:
            return -1
         else:
            return 1
      if self.mboxDirs != other.mboxDirs:
         if self.mboxDirs < other.mboxDirs:
            return -1
         else:
            return 1
      return 0

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setCompressMode(self, value):
      """
      Property target used to set the compress mode.
      If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COMPRESS_MODES:
            raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
      self._compressMode = value

   def _getCompressMode(self):
      """
      Property target used to get the compress mode.
      """
      return self._compressMode

   def _setMboxFiles(self, value):
      """
      Property target used to set the mboxFiles list.
      Either the value must be C{None} or each element must be an C{MboxFile}.
      @raise ValueError: If the value is not an C{MboxFile}.
      """
      if value is None:
         self._mboxFiles = None
      else:
         try:
            saved = self._mboxFiles
            self._mboxFiles = ObjectTypeList(MboxFile, "MboxFile")
            self._mboxFiles.extend(value)
         except Exception, e:
            self._mboxFiles = saved
            raise e

   def _getMboxFiles(self):
      """
      Property target used to get the mboxFiles list.
      """
      return self._mboxFiles

   def _setMboxDirs(self, value):
      """
      Property target used to set the mboxDirs list.
      Either the value must be C{None} or each element must be an C{MboxDir}.
      @raise ValueError: If the value is not an C{MboxDir}.
      """
      if value is None:
         self._mboxDirs = None
      else:
         try:
            saved = self._mboxDirs
            self._mboxDirs = ObjectTypeList(MboxDir, "MboxDir")
            self._mboxDirs.extend(value)
         except Exception, e:
            self._mboxDirs = saved
            raise e

   def _getMboxDirs(self):
      """
      Property target used to get the mboxDirs list.
      """
      return self._mboxDirs

   collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.")
   compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.")
   mboxFiles = property(_getMboxFiles, _setMboxFiles, None, doc="List of mbox files to back up.")
   mboxDirs = property(_getMboxDirs, _setMboxDirs, None, doc="List of mbox directories to back up.")
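The type-checked list behavior that `ObjectTypeList` provides for `mboxFiles` and `mboxDirs` can be sketched standalone. This is a simplified stand-in that only guards `append` and `extend`; the real `util.ObjectTypeList` overrides more list methods:

```python
class TypedList(list):
    """Simplified stand-in for util.ObjectTypeList: rejects elements of the wrong type."""

    def __init__(self, objType, objName):
        super().__init__()
        self.objType = objType
        self.objName = objName

    def append(self, item):
        # Validate on every insertion so a bad element never enters the list.
        if not isinstance(item, self.objType):
            raise ValueError("Item must be a %s." % self.objName)
        super().append(item)

    def extend(self, items):
        for item in items:
            self.append(item)
```

This is why the property setters above can simply `extend(value)` inside a try/except: any wrong-typed element raises before the assignment is committed.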


########################################################################
# LocalConfig class definition
########################################################################

class LocalConfig(object):

   """
   Class representing this extension's configuration document.

   This is not a general-purpose configuration object like the main Cedar
   Backup configuration object.  Instead, it just knows how to parse and emit
   mbox-specific configuration values.  Third parties who need to read and
   write configuration related to this extension should access it through the
   constructor, C{validate} and C{addConfig} methods.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, mbox, validate, addConfig
   """

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath}, then configuration will be empty and will be invalid until it
      is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not), this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._mbox = None
      self.mbox = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         xmlData = open(xmlPath).read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.mbox)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.mbox != other.mbox:
         if self.mbox < other.mbox:
            return -1
         else:
            return 1
      return 0

   def _setMbox(self, value):
      """
      Property target used to set the mbox configuration value.
      If not C{None}, the value must be a C{MboxConfig} object.
      @raise ValueError: If the value is not a C{MboxConfig} object.
      """
      if value is None:
         self._mbox = None
      else:
         if not isinstance(value, MboxConfig):
            raise ValueError("Value must be a C{MboxConfig} object.")
         self._mbox = value

   def _getMbox(self):
      """
      Property target used to get the mbox configuration value.
      """
      return self._mbox

   mbox = property(_getMbox, _setMbox, None, "Mbox configuration in terms of a C{MboxConfig} object.")

   def validate(self):
      """
      Validates configuration represented by the object.

      Mbox configuration must be filled in.  Within that, the collect mode and
      compress mode are both optional, but the combined list of mbox files and
      directories must contain at least one entry.

      Each configured file or directory must contain an absolute path, and
      must either be able to take collect mode and compress mode configuration
      from the parent C{MboxConfig} object, or must set each value on its own.

      @raise ValueError: If one of the validations fails.
      """
      if self.mbox is None:
         raise ValueError("Mbox section is required.")
      if ((self.mbox.mboxFiles is None or len(self.mbox.mboxFiles) < 1) and
          (self.mbox.mboxDirs is None or len(self.mbox.mboxDirs) < 1)):
         raise ValueError("At least one mbox file or directory must be configured.")
      if self.mbox.mboxFiles is not None:
         for mboxFile in self.mbox.mboxFiles:
            if mboxFile.absolutePath is None:
               raise ValueError("Each mbox file must set an absolute path.")
            if self.mbox.collectMode is None and mboxFile.collectMode is None:
               raise ValueError("Collect mode must either be set in parent mbox section or individual mbox file.")
            if self.mbox.compressMode is None and mboxFile.compressMode is None:
               raise ValueError("Compress mode must either be set in parent mbox section or individual mbox file.")
      if self.mbox.mboxDirs is not None:
         for mboxDir in self.mbox.mboxDirs:
            if mboxDir.absolutePath is None:
               raise ValueError("Each mbox directory must set an absolute path.")
            if self.mbox.collectMode is None and mboxDir.collectMode is None:
               raise ValueError("Collect mode must either be set in parent mbox section or individual mbox directory.")
            if self.mbox.compressMode is None and mboxDir.compressMode is None:
               raise ValueError("Compress mode must either be set in parent mbox section or individual mbox directory.")
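The rule validate() enforces for each item — collect and compress modes must come either from the parent mbox section or from the item itself — reduces to one small resolution function. A hypothetical restatement, not part of the module:

```python
def resolve_mode(parent_mode, item_mode, what):
    """Return the effective mode for an item, preferring the item-level override.

    Raises ValueError when neither level sets the mode, matching the
    validation error validate() raises.  (Helper name is hypothetical.)
    """
    mode = item_mode if item_mode is not None else parent_mode
    if mode is None:
        raise ValueError("%s must be set in parent mbox section or individual item." % what)
    return mode
```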

   def addConfig(self, xmlDom, parentNode):
      """
      Adds an <mbox> configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.

      We add the following fields to the document::

         collectMode    //cb_config/mbox/collect_mode
         compressMode   //cb_config/mbox/compress_mode

      We also add groups of the following items, one list element per
      item::

         mboxFiles      //cb_config/mbox/file
         mboxDirs       //cb_config/mbox/dir

      The mbox files and mbox directories are added by L{_addMboxFile} and
      L{_addMboxDir}.

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      """
      if self.mbox is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "mbox")
         addStringNode(xmlDom, sectionNode, "collect_mode", self.mbox.collectMode)
         addStringNode(xmlDom, sectionNode, "compress_mode", self.mbox.compressMode)
         if self.mbox.mboxFiles is not None:
            for mboxFile in self.mbox.mboxFiles:
               LocalConfig._addMboxFile(xmlDom, sectionNode, mboxFile)
         if self.mbox.mboxDirs is not None:
            for mboxDir in self.mbox.mboxDirs:
               LocalConfig._addMboxDir(xmlDom, sectionNode, mboxDir)

   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls a static method to parse the mbox configuration section.

      @param xmlData: XML data to be parsed.
      @type xmlData: String data.

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._mbox = LocalConfig._parseMbox(parentNode)

   @staticmethod
   def _parseMbox(parent):
      """
      Parses an mbox configuration section.

      We read the following individual fields::

         collectMode    //cb_config/mbox/collect_mode
         compressMode   //cb_config/mbox/compress_mode

      We also read groups of the following items, one list element per
      item::

         mboxFiles      //cb_config/mbox/file
         mboxDirs       //cb_config/mbox/dir

      The mbox files are parsed by L{_parseMboxFiles} and the mbox
      directories are parsed by L{_parseMboxDirs}.

      @param parent: Parent node to search beneath.

      @return: C{MboxConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      mbox = None
      section = readFirstChild(parent, "mbox")
      if section is not None:
         mbox = MboxConfig()
         mbox.collectMode = readString(section, "collect_mode")
         mbox.compressMode = readString(section, "compress_mode")
         mbox.mboxFiles = LocalConfig._parseMboxFiles(section)
         mbox.mboxDirs = LocalConfig._parseMboxDirs(section)
      return mbox

   @staticmethod
   def _parseMboxFiles(parent):
      """
      Reads a list of C{MboxFile} objects from immediately beneath the parent.

      We read the following individual fields::

         absolutePath   abs_path
         collectMode    collect_mode
         compressMode   compress_mode

      @param parent: Parent node to search beneath.

      @return: List of C{MboxFile} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parent, "file"):
         if isElement(entry):
            mboxFile = MboxFile()
            mboxFile.absolutePath = readString(entry, "abs_path")
            mboxFile.collectMode = readString(entry, "collect_mode")
            mboxFile.compressMode = readString(entry, "compress_mode")
            lst.append(mboxFile)
      if lst == []:
         lst = None
      return lst

   @staticmethod
   def _parseMboxDirs(parent):
      """
      Reads a list of C{MboxDir} objects from immediately beneath the parent.

      We read the following individual fields::

         absolutePath           abs_path
         collectMode            collect_mode
         compressMode           compress_mode

      We also read groups of the following items, one list element per
      item::

         relativeExcludePaths   exclude/rel_path
         excludePatterns        exclude/pattern

      The exclusions are parsed by L{_parseExclusions}.

      @param parent: Parent node to search beneath.

      @return: List of C{MboxDir} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parent, "dir"):
         if isElement(entry):
            mboxDir = MboxDir()
            mboxDir.absolutePath = readString(entry, "abs_path")
            mboxDir.collectMode = readString(entry, "collect_mode")
            mboxDir.compressMode = readString(entry, "compress_mode")
            (mboxDir.relativeExcludePaths, mboxDir.excludePatterns) = LocalConfig._parseExclusions(entry)
            lst.append(mboxDir)
      if lst == []:
         lst = None
      return lst

   @staticmethod
   def _parseExclusions(parentNode):
      """
      Reads exclusions data from immediately beneath the parent.

      We read groups of the following items, one list element per item::

         relative   exclude/rel_path
         patterns   exclude/pattern

      If there are none of some item (i.e. no relative path items) then
      C{None} will be returned for that item in the tuple.

      @param parentNode: Parent node to search beneath.

      @return: Tuple of (relative, patterns) exclusions.
      """
      section = readFirstChild(parentNode, "exclude")
      if section is None:
         return (None, None)
      else:
         relative = readStringList(section, "rel_path")
         patterns = readStringList(section, "pattern")
         return (relative, patterns)

   @staticmethod
   def _addMboxFile(xmlDom, parentNode, mboxFile):
      """
      Adds an mbox file container as the next child of a parent.

      We add the following fields to the document::

         absolutePath   file/abs_path
         collectMode    file/collect_mode
         compressMode   file/compress_mode

      The <file> node itself is created as the next child of the parent node.
      This method only adds one mbox file node.  The parent must loop for each
      mbox file in the C{MboxConfig} object.

      If C{mboxFile} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      @param mboxFile: MboxFile to be added to the document.
      """
      if mboxFile is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "file")
         addStringNode(xmlDom, sectionNode, "abs_path", mboxFile.absolutePath)
         addStringNode(xmlDom, sectionNode, "collect_mode", mboxFile.collectMode)
         addStringNode(xmlDom, sectionNode, "compress_mode", mboxFile.compressMode)

   @staticmethod
   def _addMboxDir(xmlDom, parentNode, mboxDir):
      """
      Adds an mbox directory container as the next child of a parent.

      We add the following fields to the document::

         absolutePath           dir/abs_path
         collectMode            dir/collect_mode
         compressMode           dir/compress_mode

      We also add groups of the following items, one list element per item::

         relativeExcludePaths   dir/exclude/rel_path
         excludePatterns        dir/exclude/pattern

      The <dir> node itself is created as the next child of the parent node.
      This method only adds one mbox directory node.  The parent must loop for
      each mbox directory in the C{MboxConfig} object.

      If C{mboxDir} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      @param mboxDir: MboxDir to be added to the document.
      """
      if mboxDir is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "dir")
         addStringNode(xmlDom, sectionNode, "abs_path", mboxDir.absolutePath)
         addStringNode(xmlDom, sectionNode, "collect_mode", mboxDir.collectMode)
         addStringNode(xmlDom, sectionNode, "compress_mode", mboxDir.compressMode)
         if ((mboxDir.relativeExcludePaths is not None and mboxDir.relativeExcludePaths != []) or
             (mboxDir.excludePatterns is not None and mboxDir.excludePatterns != [])):
            excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
            if mboxDir.relativeExcludePaths is not None:
               for relativePath in mboxDir.relativeExcludePaths:
                  addStringNode(xmlDom, excludeNode, "rel_path", relativePath)
            if mboxDir.excludePatterns is not None:
               for pattern in mboxDir.excludePatterns:
                  addStringNode(xmlDom, excludeNode, "pattern", pattern)


########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the mbox backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions.
   @raise IOError: If a backup could not be written for some reason.
   """
   logger.debug("Executing mbox extended action.")
   newRevision = datetime.datetime.today()  # mark here so all actions are after this date/time
   if config.options is None or config.collect is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   todayIsStart = isStartOfWeek(config.options.startingDay)
   fullBackup = options.full or todayIsStart
   logger.debug("Full backup flag is [%s]" % fullBackup)
   if local.mbox.mboxFiles is not None:
      for mboxFile in local.mbox.mboxFiles:
         logger.debug("Working with mbox file [%s]" % mboxFile.absolutePath)
         collectMode = _getCollectMode(local, mboxFile)
         compressMode = _getCompressMode(local, mboxFile)
         lastRevision = _loadLastRevision(config, mboxFile, fullBackup, collectMode)
         if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
            logger.debug("Mbox file meets criteria to be backed up today.")
            _backupMboxFile(config, mboxFile.absolutePath, fullBackup,
                            collectMode, compressMode, lastRevision, newRevision)
         else:
            logger.debug("Mbox file will not be backed up, per collect mode.")
         if collectMode == 'incr':
            _writeNewRevision(config, mboxFile, newRevision)
   if local.mbox.mboxDirs is not None:
      for mboxDir in local.mbox.mboxDirs:
         logger.debug("Working with mbox directory [%s]" % mboxDir.absolutePath)
         collectMode = _getCollectMode(local, mboxDir)
         compressMode = _getCompressMode(local, mboxDir)
         lastRevision = _loadLastRevision(config, mboxDir, fullBackup, collectMode)
         (excludePaths, excludePatterns) = _getExclusions(mboxDir)
         if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
            logger.debug("Mbox directory meets criteria to be backed up today.")
            _backupMboxDir(config, mboxDir.absolutePath,
                           fullBackup, collectMode, compressMode,
                           lastRevision, newRevision,
                           excludePaths, excludePatterns)
         else:
            logger.debug("Mbox directory will not be backed up, per collect mode.")
         if collectMode == 'incr':
            _writeNewRevision(config, mboxDir, newRevision)
   logger.info("Executed the mbox extended action successfully.")
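The per-item backup decision inside executeAction reduces to a single predicate, restated here as a hypothetical helper (the real code tests it inline):

```python
def should_back_up(full_backup, collect_mode, today_is_start):
    """Mirror executeAction's criteria: full backups and daily/incr items
    always run; weekly items run only on the configured starting day."""
    return (full_backup
            or collect_mode in ("daily", "incr")
            or (collect_mode == "weekly" and today_is_start))
```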

def _getCollectMode(local, item):
   """
   Gets the collect mode that should be used for an mbox file or directory.
   Use file- or directory-specific value if possible, otherwise take from mbox section.
   @param local: LocalConfig object.
   @param item: Mbox file or directory.
   @return: Collect mode to use.
   """
   if item.collectMode is None:
      collectMode = local.mbox.collectMode
   else:
      collectMode = item.collectMode
   logger.debug("Collect mode is [%s]" % collectMode)
   return collectMode

def _getCompressMode(local, item):
   """
   Gets the compress mode that should be used for an mbox file or directory.
   Use file- or directory-specific value if possible, otherwise take from mbox section.
   @param local: LocalConfig object.
   @param item: Mbox file or directory.
   @return: Compress mode to use.
   """
   if item.compressMode is None:
      compressMode = local.mbox.compressMode
   else:
      compressMode = item.compressMode
   logger.debug("Compress mode is [%s]" % compressMode)
   return compressMode

def _getRevisionPath(config, item):
   """
   Gets the path to the revision file associated with an mbox file or directory.
   @param config: Cedar Backup configuration.
   @param item: Mbox file or directory.
   @return: Absolute path to the revision file associated with the item.
   """
   normalized = buildNormalizedPath(item.absolutePath)
   filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION)
   revisionPath = os.path.join(config.options.workingDir, filename)
   logger.debug("Revision file path is [%s]" % revisionPath)
   return revisionPath

def _loadLastRevision(config, item, fullBackup, collectMode):
   """
   Loads the last revision date for this item from disk and returns it.

   If this is a full backup, or if the revision file cannot be loaded for some
   reason, then C{None} is returned.  This indicates that there is no previous
   revision, so the entire mail file or directory should be backed up.

   @note: We write the actual revision object to disk via pickle, so we don't
   deal with the datetime precision or format at all.  Whatever's in the object
   is what we write.

   @param config: Cedar Backup configuration.
   @param item: Mbox file or directory.
   @param fullBackup: Indicates whether this is a full backup.
   @param collectMode: Indicates the collect mode for this item.

   @return: Revision date as a datetime.datetime object or C{None}.
   """
   revisionPath = _getRevisionPath(config, item)
   if fullBackup:
      revisionDate = None
      logger.debug("Revision file ignored because this is a full backup.")
   elif collectMode in ['weekly', 'daily']:
      revisionDate = None
      logger.debug("No revision file based on collect mode [%s]." % collectMode)
   else:
      logger.debug("Revision file will be used for non-full incremental backup.")
      if not os.path.isfile(revisionPath):
         revisionDate = None
         logger.debug("Revision file [%s] does not exist on disk." % revisionPath)
      else:
         try:
            revisionDate = pickle.load(open(revisionPath, "r"))
            logger.debug("Loaded revision file [%s] from disk: [%s]" % (revisionPath, revisionDate))
         except:
            revisionDate = None
            logger.error("Failed loading revision file [%s] from disk." % revisionPath)
   return revisionDate
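A revision file is just a pickled `datetime`; the load side is above and the write side (`_writeNewRevision`) follows below. A standalone round-trip sketch in modern Python, using binary mode and context managers, which is an assumption on my part: the Python 2 code here opens the files in text mode and never closes them explicitly:

```python
import datetime
import os
import pickle
import tempfile

def write_revision(path, revision):
    """Pickle the revision datetime to disk (mirrors _writeNewRevision's core)."""
    with open(path, "wb") as fp:
        pickle.dump(revision, fp)

def load_revision(path):
    """Return the pickled revision, or None when the file is missing or
    unreadable (mirrors _loadLastRevision's fallback behavior)."""
    if not os.path.isfile(path):
        return None
    try:
        with open(path, "rb") as fp:
            return pickle.load(fp)
    except Exception:
        return None
```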
    1177 -def _writeNewRevision(config, item, newRevision):
    1178 """ 1179 Writes new revision information to disk. 1180 1181 If we can't write the revision file successfully for any reason, we'll log 1182 the condition but won't throw an exception. 1183 1184 @note: We write the actual revision object to disk via pickle, so we don't 1185 deal with the datetime precision or format at all. Whatever's in the object 1186 is what we write. 1187 1188 @param config: Cedar Backup configuration. 1189 @param item: Mbox file or directory 1190 @param newRevision: Revision date as a datetime.datetime object. 1191 """ 1192 revisionPath = _getRevisionPath(config, item) 1193 try: 1194 pickle.dump(newRevision, open(revisionPath, "w")) 1195 changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup) 1196 logger.debug("Wrote new revision file [%s] to disk: [%s]" % (revisionPath, newRevision)) 1197 except: 1198 logger.error("Failed to write revision file [%s] to disk." % revisionPath)
    1199
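Taken together, `_loadLastRevision` and `_writeNewRevision` implement a small pickle-based revision store: no file means "no previous revision, take a full backup". A minimal Python 3 sketch of the same idea (file names here are hypothetical; the original opens the pickle in text mode, which only worked under Python 2):

```python
import datetime
import os
import pickle
import tempfile

def write_revision(path, revision):
    # Persist the revision object; mirror the source's log-and-continue
    # semantics by swallowing I/O errors rather than raising.
    try:
        with open(path, "wb") as fp:
            pickle.dump(revision, fp)
    except OSError:
        pass

def load_revision(path):
    # Return the stored revision, or None if it is missing or unreadable.
    # None tells the caller to back up the entire mail file or directory.
    if not os.path.isfile(path):
        return None
    try:
        with open(path, "rb") as fp:
            return pickle.load(fp)
    except (OSError, pickle.PickleError):
        return None

revision_path = os.path.join(tempfile.mkdtemp(), "mbox.last")
assert load_revision(revision_path) is None   # first run: full backup
now = datetime.datetime(2013, 5, 9, 12, 0, 0)
write_revision(revision_path, now)
loaded = load_revision(revision_path)
```

Because the whole datetime object is pickled, no precision or format decisions are made at write time, exactly as the note above says.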
    1200 -def _getExclusions(mboxDir):
    1201 """ 1202 Gets exclusions (file and patterns) associated with an mbox directory. 1203 1204 The returned files value is a list of absolute paths to be excluded from the 1205 backup for a given directory. It is derived from the mbox directory's 1206 relative exclude paths. 1207 1208 The returned patterns value is a list of patterns to be excluded from the 1209 backup for a given directory. It is derived from the mbox directory's list 1210 of patterns. 1211 1212 @param mboxDir: Mbox directory object. 1213 1214 @return: Tuple (files, patterns) indicating what to exclude. 1215 """ 1216 paths = [] 1217 if mboxDir.relativeExcludePaths is not None: 1218 for relativePath in mboxDir.relativeExcludePaths: 1219 paths.append(os.path.join(mboxDir.absolutePath, relativePath)) 1220 patterns = [] 1221 if mboxDir.excludePatterns is not None: 1222 patterns.extend(mboxDir.excludePatterns) 1223 logger.debug("Exclude paths: %s" % paths) 1224 logger.debug("Exclude patterns: %s" % patterns) 1225 return(paths, patterns)
    1226
    1227 -def _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None):
    1228 """ 1229 Gets the backup file path (including correct extension) associated with an mbox path. 1230 1231 We assume that if the target directory is passed in, that we're backing up a 1232 directory. Under these circumstances, we'll just use the basename of the 1233 individual path as the output file. 1234 1235 @note: The backup path only contains the current date in YYYYMMDD format, 1236 but that's OK because the index information (stored elsewhere) is the actual 1237 date object. 1238 1239 @param config: Cedar Backup configuration. 1240 @param mboxPath: Path to the indicated mbox file or directory 1241 @param compressMode: Compress mode to use for this mbox path 1242 @param newRevision: Revision this backup path represents 1243 @param targetDir: Target directory in which the path should exist 1244 1245 @return: Absolute path to the backup file associated with the repository. 1246 """ 1247 if targetDir is None: 1248 normalizedPath = buildNormalizedPath(mboxPath) 1249 revisionDate = newRevision.strftime("%Y%m%d") 1250 filename = "mbox-%s-%s" % (revisionDate, normalizedPath) 1251 else: 1252 filename = os.path.basename(mboxPath) 1253 if compressMode == 'gzip': 1254 filename = "%s.gz" % filename 1255 elif compressMode == 'bzip2': 1256 filename = "%s.bz2" % filename 1257 if targetDir is None: 1258 backupPath = os.path.join(config.collect.targetDir, filename) 1259 else: 1260 backupPath = os.path.join(targetDir, filename) 1261 logger.debug("Backup file path is [%s]" % backupPath) 1262 return backupPath
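The naming scheme of `_getBackupPath` can be reproduced in isolation. This sketch substitutes a simple slash-to-dash transform for C{buildNormalizedPath()}, whose exact behavior is not shown on this page, so the generated names are illustrative only:

```python
import datetime
import os

def backup_filename(mbox_path, compress_mode, revision, target_dir=None):
    # Directory backups (target_dir given) keep the file's basename;
    # top-level backups embed the revision date and a normalized path.
    if target_dir is None:
        # Stand-in for buildNormalizedPath(); the real transform may differ.
        normalized = mbox_path.strip("/").replace("/", "-")
        name = "mbox-%s-%s" % (revision.strftime("%Y%m%d"), normalized)
    else:
        name = os.path.basename(mbox_path)
    # Extension depends on the configured compress mode.
    if compress_mode == "gzip":
        name += ".gz"
    elif compress_mode == "bzip2":
        name += ".bz2"
    return name

rev = datetime.datetime(2013, 5, 9)
gz_name = backup_filename("/home/user/mail/inbox", "gzip", rev)
dir_name = backup_filename("/home/user/mail/inbox", "none", rev, target_dir="/tmp")
```

As the note above says, only YYYYMMDD ends up in the filename; the full datetime lives in the revision file.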
    1263
    1264 -def _getTarfilePath(config, mboxPath, compressMode, newRevision):
    1265 """ 1266 Gets the tarfile backup file path (including correct extension) associated 1267 with an mbox path. 1268 1269 Along with the path, the tar archive mode is returned in a form that can 1270 be used with L{BackupFileList.generateTarfile}. 1271 1272 @note: The tarfile path only contains the current date in YYYYMMDD format, 1273 but that's OK because the index information (stored elsewhere) is the actual 1274 date object. 1275 1276 @param config: Cedar Backup configuration. 1277 @param mboxPath: Path to the indicated mbox file or directory 1278 @param compressMode: Compress mode to use for this mbox path 1279 @param newRevision: Revision this backup path represents 1280 1281 @return: Tuple of (absolute path to tarfile, tar archive mode) 1282 """ 1283 normalizedPath = buildNormalizedPath(mboxPath) 1284 revisionDate = newRevision.strftime("%Y%m%d") 1285 filename = "mbox-%s-%s.tar" % (revisionDate, normalizedPath) 1286 if compressMode == 'gzip': 1287 filename = "%s.gz" % filename 1288 archiveMode = "targz" 1289 elif compressMode == 'bzip2': 1290 filename = "%s.bz2" % filename 1291 archiveMode = "tarbz2" 1292 else: 1293 archiveMode = "tar" 1294 tarfilePath = os.path.join(config.collect.targetDir, filename) 1295 logger.debug("Tarfile path is [%s]" % tarfilePath) 1296 return (tarfilePath, archiveMode)
    1297
    1298 -def _getOutputFile(backupPath, compressMode):
1299 """ 1300 Opens the output file used for saving backup information. 1301 1302 If the compress mode is "gzip", we'll open a C{GzipFile}, and if the 1303 compress mode is "bzip2", we'll open a C{BZ2File}. Otherwise, we'll just 1304 return an object from the normal C{open()} method. 1305 1306 @param backupPath: Path to file to open. 1307 @param compressMode: Compress mode of file ("none", "gzip", "bzip2"). 1308 1309 @return: Output file object. 1310 """ 1311 if compressMode == "gzip": 1312 return GzipFile(backupPath, "w") 1313 elif compressMode == "bzip2": 1314 return BZ2File(backupPath, "w") 1315 else: 1316 return open(backupPath, "w")
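A Python 3 sketch of the same dispatch using the stdlib C{gzip} and C{bz2} modules (the original returns C{GzipFile}/C{BZ2File} objects directly; C{gzip.open()}/C{bz2.open()} are the modern equivalents):

```python
import bz2
import gzip
import os
import tempfile

def open_output(path, compress_mode):
    # Pick a writer whose on-disk format matches the configured compress mode.
    if compress_mode == "gzip":
        return gzip.open(path, "wb")
    elif compress_mode == "bzip2":
        return bz2.open(path, "wb")
    return open(path, "wb")

path = os.path.join(tempfile.mkdtemp(), "inbox.gz")
with open_output(path, "gzip") as fp:
    fp.write(b"From sender@example.com\n")
# Reading it back with gzip confirms the extension matches the contents.
with gzip.open(path, "rb") as fp:
    data = fp.read()
```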
    1317
    1318 -def _backupMboxFile(config, absolutePath, 1319 fullBackup, collectMode, compressMode, 1320 lastRevision, newRevision, targetDir=None):
    1321 """ 1322 Backs up an individual mbox file. 1323 1324 @param config: Cedar Backup configuration. 1325 @param absolutePath: Path to mbox file to back up. 1326 @param fullBackup: Indicates whether this should be a full backup. 1327 @param collectMode: Indicates the collect mode for this item 1328 @param compressMode: Compress mode of file ("none", "gzip", "bzip") 1329 @param lastRevision: Date of last backup as datetime.datetime 1330 @param newRevision: Date of new (current) backup as datetime.datetime 1331 @param targetDir: Target directory to write the backed-up file into 1332 1333 @raise ValueError: If some value is missing or invalid. 1334 @raise IOError: If there is a problem backing up the mbox file. 1335 """ 1336 backupPath = _getBackupPath(config, absolutePath, compressMode, newRevision, targetDir=targetDir) 1337 outputFile = _getOutputFile(backupPath, compressMode) 1338 if fullBackup or collectMode != "incr" or lastRevision is None: 1339 args = [ "-a", "-u", absolutePath, ] # remove duplicates but fetch entire mailbox 1340 else: 1341 revisionDate = lastRevision.strftime("%Y-%m-%dT%H:%M:%S") # ISO-8601 format; grepmail calls Date::Parse::str2time() 1342 args = [ "-a", "-u", "-d", "since %s" % revisionDate, absolutePath, ] 1343 command = resolveCommand(GREPMAIL_COMMAND) 1344 result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0] 1345 if result != 0: 1346 raise IOError("Error [%d] executing grepmail on [%s]." % (result, absolutePath)) 1347 logger.debug("Completed backing up mailbox [%s]." % absolutePath) 1348 return backupPath
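The interesting part of `_backupMboxFile` is the full-versus-incremental decision; isolated from the actual grepmail invocation, it can be sketched like this (grepmail's C{-d 'since ...'} filter does the date-based selection):

```python
import datetime

def grepmail_args(mbox_path, full_backup, collect_mode, last_revision):
    # Full dump when forced, when the mode is not incremental, or when
    # there is no previous revision to measure against.
    if full_backup or collect_mode != "incr" or last_revision is None:
        return ["-a", "-u", mbox_path]   # -u removes duplicate messages
    # ISO-8601 timestamp, which grepmail hands to Date::Parse::str2time().
    since = last_revision.strftime("%Y-%m-%dT%H:%M:%S")
    return ["-a", "-u", "-d", "since %s" % since, mbox_path]

full = grepmail_args("/var/mail/user", True, "incr", None)
incr = grepmail_args("/var/mail/user", False, "incr",
                     datetime.datetime(2013, 5, 1, 6, 30, 0))
```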
    1349
    1350 -def _backupMboxDir(config, absolutePath, 1351 fullBackup, collectMode, compressMode, 1352 lastRevision, newRevision, 1353 excludePaths, excludePatterns):
    1354 """ 1355 Backs up a directory containing mbox files. 1356 1357 @param config: Cedar Backup configuration. 1358 @param absolutePath: Path to mbox directory to back up. 1359 @param fullBackup: Indicates whether this should be a full backup. 1360 @param collectMode: Indicates the collect mode for this item 1361 @param compressMode: Compress mode of file ("none", "gzip", "bzip") 1362 @param lastRevision: Date of last backup as datetime.datetime 1363 @param newRevision: Date of new (current) backup as datetime.datetime 1364 @param excludePaths: List of absolute paths to exclude. 1365 @param excludePatterns: List of patterns to exclude. 1366 1367 @raise ValueError: If some value is missing or invalid. 1368 @raise IOError: If there is a problem backing up the mbox file. 1369 """ 1370 try: 1371 tmpdir = tempfile.mkdtemp(dir=config.options.workingDir) 1372 mboxList = FilesystemList() 1373 mboxList.excludeDirs = True 1374 mboxList.excludePaths = excludePaths 1375 mboxList.excludePatterns = excludePatterns 1376 mboxList.addDirContents(absolutePath, recursive=False) 1377 tarList = BackupFileList() 1378 for item in mboxList: 1379 backupPath = _backupMboxFile(config, item, fullBackup, 1380 collectMode, "none", # no need to compress inside compressed tar 1381 lastRevision, newRevision, 1382 targetDir=tmpdir) 1383 tarList.addFile(backupPath) 1384 (tarfilePath, archiveMode) = _getTarfilePath(config, absolutePath, compressMode, newRevision) 1385 tarList.generateTarfile(tarfilePath, archiveMode, ignore=True, flat=True) 1386 changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup) 1387 logger.debug("Completed backing up directory [%s]." % absolutePath) 1388 finally: 1389 try: 1390 for item in tarList: 1391 if os.path.exists(item): 1392 try: 1393 os.remove(item) 1394 except: pass 1395 except: pass 1396 try: 1397 os.rmdir(tmpdir) 1398 except: pass
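The per-directory workflow (back up each mbox into a scratch directory, then roll the results into one archive) can be sketched with the stdlib C{tarfile} module; the C{flat=True} flag in the source corresponds to archiving each file by basename only:

```python
import os
import tarfile
import tempfile

def archive_flat(paths, tar_path, mode="w:gz"):
    # Add each file to the archive under its basename, discarding directories.
    with tarfile.open(tar_path, mode) as tar:
        for p in paths:
            tar.add(p, arcname=os.path.basename(p))

# Simulate the scratch directory of per-mailbox backups.
workdir = tempfile.mkdtemp()
files = []
for name in ("inbox", "sent"):
    p = os.path.join(workdir, name)
    with open(p, "w") as fp:
        fp.write("mbox data\n")
    files.append(p)

tar_path = os.path.join(workdir, "mbox.tar.gz")
archive_flat(files, tar_path)
with tarfile.open(tar_path, "r:gz") as tar:
    names = sorted(tar.getnames())
```

Note that the individual backups are written uncompressed ("none") because compressing files that are about to go into a compressed tar would be redundant.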
    1399

CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.collect-module.html: CedarBackup2.actions.collect

    Module collect

    source code

    Implements the standard 'collect' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
     
    executeCollect(configPath, options, config)
    Executes the collect backup action.
    source code
     
    _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    Collects a configured collect file.
    source code
     
    _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel)
    Collects a configured collect directory.
    source code
     
    _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    Execute the backup process for the indicated backup list.
    source code
     
    _loadDigest(digestPath)
    Loads the indicated digest path from disk into a dictionary.
    source code
     
    _writeDigest(config, digest, digestPath)
    Writes the digest dictionary to the indicated digest path on disk.
    source code
     
    _getCollectMode(config, item)
    Gets the collect mode that should be used for a collect directory or file.
    source code
     
    _getArchiveMode(config, item)
    Gets the archive mode that should be used for a collect directory or file.
    source code
     
    _getIgnoreFile(config, item)
    Gets the ignore file that should be used for a collect directory or file.
    source code
     
    _getLinkDepth(item)
    Gets the link depth that should be used for a collect directory.
    source code
     
    _getDereference(item)
    Gets the dereference flag that should be used for a collect directory.
    source code
     
    _getRecursionLevel(item)
    Gets the recursion level that should be used for a collect directory.
    source code
     
    _getDigestPath(config, absolutePath)
    Gets the digest path associated with a collect directory or file.
    source code
     
    _getTarfilePath(config, absolutePath, archiveMode)
    Gets the tarfile path (including correct extension) associated with a collect directory.
    source code
     
    _getExclusions(config, collectDir)
    Gets exclusions (file and patterns) associated with a collect directory.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.actions.collect")
      __package__ = 'CedarBackup2.actions'
Function Details

    executeCollect(configPath, options, config)

    source code 

    Executes the collect backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • TarError - If there is a problem creating a tar file

    Note: When the collect action is complete, we will write a collect indicator to the collect directory, so it's obvious that the collect action has completed. The stage process uses this indicator to decide whether a peer is ready to be staged.
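The indicator itself is just an empty marker file dropped in the collect directory. A sketch, where the indicator file name "cback.collect" is an assumption rather than something confirmed by this page:

```python
import os
import tempfile

def write_indicator(collect_dir, name="cback.collect"):
    # Touch an empty marker so a later stage action can tell that the
    # collect action ran to completion for this peer.
    path = os.path.join(collect_dir, name)
    open(path, "w").close()
    return path

collect_dir = tempfile.mkdtemp()
indicator = write_indicator(collect_dir)
```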

    _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)

    source code 

    Collects a configured collect file.

    The indicated collect file is collected into the indicated tarfile. For files that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten).

    The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect file itself.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path of file to collect.
    • tarfilePath - Path to tarfile that should be created.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • resetDigest - Reset digest flag.
    • digestPath - Path to digest file on disk, if needed.

    _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel)

    source code 

    Collects a configured collect directory.

    The indicated collect directory is collected into the indicated tarfile. For directories that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten).

    The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect directory itself.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path of directory to collect.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • ignoreFile - Ignore file to use.
    • linkDepth - Link depth value to use.
    • dereference - Dereference flag to use.
    • resetDigest - Reset digest flag.
    • excludePaths - List of absolute paths to exclude.
    • excludePatterns - List of patterns to exclude.
    • recursionLevel - Recursion level (zero for no recursion)

    _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)

    source code 

    Execute the backup process for the indicated backup list.

    This function exists mainly to consolidate functionality between the _collectFile and _collectDirectory functions. Those functions build the backup list; this function causes the backup to execute properly and also manages usage of the digest file on disk as explained in their comments.

For collect files, the digest file will always just contain the single file that is being backed up. This might be a little wasteful in terms of the number of files that we keep around, but it's consistent and easy to understand.

    Parameters:
    • config - Config object.
    • backupList - List to execute backup for
    • absolutePath - Absolute path of directory or file to collect.
    • tarfilePath - Path to tarfile that should be created.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • resetDigest - Reset digest flag.
    • digestPath - Path to digest file on disk, if needed.
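The digest bookkeeping amounts to: hash every candidate file, back up the ones whose hash changed, and rewrite the digest afterwards. A standalone sketch of that decision (SHA-1 is an assumption about the hash the package actually uses):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    # Hash the file contents; the digest dictionary maps path -> hash.
    with open(path, "rb") as fp:
        return hashlib.sha1(fp.read()).hexdigest()

def changed_files(paths, old_digest):
    # Return files that are new or modified, plus the refreshed digest
    # map that should be written back to disk after a successful backup.
    new_digest = {p: file_digest(p) for p in paths}
    changed = [p for p in paths if old_digest.get(p) != new_digest[p]]
    return changed, new_digest

workdir = tempfile.mkdtemp()
paths = []
for name in ("a.txt", "b.txt"):
    p = os.path.join(workdir, name)
    with open(p, "w") as fp:
        fp.write("original\n")
    paths.append(p)

first, digest = changed_files(paths, {})        # empty digest: back up everything
with open(paths[0], "w") as fp:
    fp.write("modified\n")
second, digest = changed_files(paths, digest)   # only the modified file remains
```

Passing an empty dictionary (what `_loadDigest` returns when the file is unreadable) naturally degrades to a full backup, which is the safe failure mode.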

    _loadDigest(digestPath)

    source code 

    Loads the indicated digest path from disk into a dictionary.

    If we can't load the digest successfully (either because it doesn't exist or for some other reason), then an empty dictionary will be returned - but the condition will be logged.

    Parameters:
    • digestPath - Path to the digest file on disk.
    Returns:
    Dictionary representing contents of digest path.

    _writeDigest(config, digest, digestPath)

    source code 

    Writes the digest dictionary to the indicated digest path on disk.

    If we can't write the digest successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Config object.
    • digest - Digest dictionary to write to disk.
    • digestPath - Path to the digest file on disk.

    _getCollectMode(config, item)

    source code 

    Gets the collect mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Collect mode to use.
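This getter, like `_getArchiveMode` and `_getIgnoreFile` below, follows the same override-then-fallback pattern, which a hypothetical helper captures:

```python
def resolve(item_value, section_value, default=None):
    # Prefer the per-item override, then the collect-section value,
    # then an optional hard default (as the link-depth getter does with 0).
    if item_value is not None:
        return item_value
    if section_value is not None:
        return section_value
    return default

assert resolve("incr", "daily") == "incr"    # item override wins
assert resolve(None, "daily") == "daily"     # fall back to the collect section
assert resolve(None, None, 0) == 0           # hard default as a last resort
```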

    _getArchiveMode(config, item)

    source code 

    Gets the archive mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Archive mode to use.

    _getIgnoreFile(config, item)

    source code 

    Gets the ignore file that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Ignore file to use.

    _getLinkDepth(item)

    source code 

    Gets the link depth that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero).

    Parameters:
    • item - CollectDir object
    Returns:
    Link depth to use.

    _getDereference(item)

    source code 

    Gets the dereference flag that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of False.

    Parameters:
    • item - CollectDir object
    Returns:
    Dereference flag to use.

    _getRecursionLevel(item)

    source code 

    Gets the recursion level that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero).

    Parameters:
    • item - CollectDir object
    Returns:
    Recursion level to use.

    _getDigestPath(config, absolutePath)

    source code 

    Gets the digest path associated with a collect directory or file.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path to generate digest for
    Returns:
    Absolute path to the digest associated with the collect directory or file.

    _getTarfilePath(config, absolutePath, archiveMode)

    source code 

    Gets the tarfile path (including correct extension) associated with a collect directory.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path to generate tarfile for
    • archiveMode - Archive mode to use for this tarfile.
    Returns:
    Absolute path to the tarfile associated with the collect directory.

    _getExclusions(config, collectDir)

    source code 

    Gets exclusions (file and patterns) associated with a collect directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the collect configuration absolute exclude paths and the collect directory's absolute and relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the list of patterns from the collect configuration and from the collect directory itself.

    Parameters:
    • config - Config object.
    • collectDir - Collect directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.

CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mbox.MboxFile-class.html: CedarBackup2.extend.mbox.MboxFile

    Class MboxFile

    source code

    object --+
             |
            MboxFile
    

Class representing mbox file configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, compressMode=None)
    Constructor for the MboxFile class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      absolutePath
    Absolute path to the mbox file.
      collectMode
    Overridden collect mode for this mbox file.
      compressMode
    Overridden compress mode for this mbox file.

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the MboxFile class.

    You should never directly instantiate this class.

    Parameters:
    • absolutePath - Absolute path to an mbox file on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    absolutePath

    Absolute path to the mbox file.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this mbox file.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this mbox file.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.
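The real class wires these properties up with pre-decorator C{property(_getX, _setX)} pairs; a condensed Python 3 sketch of the same validation pattern, where the contents of VALID_COLLECT_MODES are an assumption based on the restrictions listed above:

```python
import os

VALID_COLLECT_MODES = ["daily", "weekly", "incr"]  # assumed contents

class MboxFileSketch:
    """Minimal stand-in showing the validating-property pattern."""

    def __init__(self, absolutePath=None, collectMode=None):
        self.absolutePath = absolutePath
        self.collectMode = collectMode

    @property
    def absolutePath(self):
        return self._absolutePath

    @absolutePath.setter
    def absolutePath(self, value):
        # Must be absolute if set; it need not exist on disk yet.
        if value is not None and not os.path.isabs(value):
            raise ValueError("Path must be absolute: %r" % value)
        self._absolutePath = value

    @property
    def collectMode(self):
        return self._collectMode

    @collectMode.setter
    def collectMode(self, value):
        if value is not None and value not in VALID_COLLECT_MODES:
            raise ValueError("Invalid collect mode: %r" % value)
        self._collectMode = value

mbox = MboxFileSketch("/var/mail/user", "incr")
try:
    MboxFileSketch("relative/path")
    failed = False
except ValueError:
    failed = True
```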

CedarBackup2-2.22.0/doc/interface/CedarBackup2.knapsack-pysrc.html: CedarBackup2.knapsack

    Source Code for Module CedarBackup2.knapsack

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2005,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: knapsack.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Provides knapsack algorithms used for "fit" decisions 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######## 
     36  # Notes 
     37  ######## 
     38   
     39  """ 
     40  Provides the implementation for various knapsack algorithms. 
     41   
     42  Knapsack algorithms are "fit" algorithms, used to take a set of "things" and 
     43  decide on the optimal way to fit them into some container.  The focus of this 
     44  code is to fit files onto a disc, although the interface (in terms of item, 
     45  item size and capacity size, with no units) is generic enough that it can 
     46  be applied to items other than files. 
     47   
     48  All of the algorithms implemented below assume that "optimal" means "use up as 
     49  much of the disc's capacity as possible", but each produces slightly different 
     50  results.  For instance, the best fit and first fit algorithms tend to include 
     51  fewer files than the worst fit and alternate fit algorithms, even if they use 
     52  the disc space more efficiently. 
     53   
     54  Usually, for a given set of circumstances, it will be obvious to a human which 
     55  algorithm is the right one to use, based on trade-offs between number of files 
     56  included and ideal space utilization.  It's a little more difficult to do this 
     57  programmatically.  For Cedar Backup's purposes (i.e. trying to fit a small 
     58  number of collect-directory tarfiles onto a disc), worst-fit is probably the 
     59  best choice if the goal is to include as many of the collect directories as 
     60  possible. 
     61   
     62  @sort: firstFit, bestFit, worstFit, alternateFit 
     63   
     64  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     65  """ 
     66   
     67  ####################################################################### 
     68  # Public functions 
     69  ####################################################################### 
     70   
     71  ###################### 
     72  # firstFit() function 
     73  ###################### 
     74   
    
    75 -def firstFit(items, capacity):
    76 77 """ 78 Implements the first-fit knapsack algorithm. 79 80 The first-fit algorithm proceeds through an unsorted list of items until 81 running out of items or meeting capacity exactly. If capacity is exceeded, 82 the item that caused capacity to be exceeded is thrown away and the next one 83 is tried. This algorithm generally performs more poorly than the other 84 algorithms both in terms of capacity utilization and item utilization, but 85 can be as much as an order of magnitude faster on large lists of items 86 because it doesn't require any sorting. 87 88 The "size" values in the items and capacity arguments must be comparable, 89 but they are unitless from the perspective of this function. Zero-sized 90 items and capacity are considered degenerate cases. If capacity is zero, 91 no items fit, period, even if the items list contains zero-sized items. 92 93 The dictionary is indexed by its key, and then includes its key. This 94 seems kind of strange on first glance. It works this way to facilitate 95 easy sorting of the list on key if needed. 96 97 The function assumes that the list of items may be used destructively, if 98 needed. This avoids the overhead of having the function make a copy of the 99 list, if this is not required. Callers should pass C{items.copy()} if they 100 do not want their version of the list modified. 101 102 The function returns a list of chosen items and the unitless amount of 103 capacity used by the items. 
104 105 @param items: Items to operate on 106 @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 107 108 @param capacity: Capacity of container to fit to 109 @type capacity: integer 110 111 @returns: Tuple C{(items, used)} as described above 112 """ 113 114 # Use dict since insert into dict is faster than list append 115 included = { } 116 117 # Search the list as it stands (arbitrary order) 118 used = 0 119 remaining = capacity 120 for key in items.keys(): 121 if remaining == 0: 122 break 123 if remaining - items[key][1] >= 0: 124 included[key] = None 125 used += items[key][1] 126 remaining -= items[key][1] 127 128 # Return results 129 return (included.keys(), used)
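With the documented data structure (a dictionary keyed on item, of C{(item, size)} tuples), first-fit behaves like the following Python 3 rendering; note that under Python 3 the original's C{included.keys()} would be a view object and would need C{list(...)}:

```python
def first_fit(items, capacity):
    # Walk the items in arbitrary (here, insertion) order, skipping any
    # item that would overflow the remaining capacity.
    included, used = [], 0
    for key, (_, size) in items.items():
        if used == capacity:
            break
        if used + size <= capacity:
            included.append(key)
            used += size
    return included, used

items = {"a.tar": ("a.tar", 300), "b.tar": ("b.tar", 500), "c.tar": ("c.tar", 250)}
chosen, used = first_fit(items, 600)   # b.tar would overflow and is skipped
```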
    130 131 132 ##################### 133 # bestFit() function 134 ##################### 135
    136 -def bestFit(items, capacity):
137 138 """ 139 Implements the best-fit knapsack algorithm. 140 141 The best-fit algorithm proceeds through a sorted list of items (sorted from 142 largest to smallest) until running out of items or meeting capacity exactly. 143 If capacity is exceeded, the item that caused capacity to be exceeded is 144 thrown away and the next one is tried. The algorithm effectively includes 145 the minimum number of items possible in its search for optimal capacity 146 utilization. For large lists of mixed-size items, it's not unusual to see 147 the algorithm achieve 100% capacity utilization by including fewer than 1% 148 of the items. Probably because it often has to look at fewer of the items 149 before completing, it tends to be a little faster than the worst-fit or 150 alternate-fit algorithms. 151 152 The "size" values in the items and capacity arguments must be comparable, 153 but they are unitless from the perspective of this function. Zero-sized 154 items and capacity are considered degenerate cases. If capacity is zero, 155 no items fit, period, even if the items list contains zero-sized items. 156 157 The dictionary is indexed by its key, and then includes its key. This 158 seems kind of strange on first glance. It works this way to facilitate 159 easy sorting of the list on key if needed. 160 161 The function assumes that the list of items may be used destructively, if 162 needed. This avoids the overhead of having the function make a copy of the 163 list, if this is not required. Callers should pass C{items.copy()} if they 164 do not want their version of the list modified. 165 166 The function returns a list of chosen items and the unitless amount of 167 capacity used by the items. 
168 169 @param items: Items to operate on 170 @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 171 172 @param capacity: Capacity of container to fit to 173 @type capacity: integer 174 175 @returns: Tuple C{(items, used)} as described above 176 """ 177 178 # Use dict since insert into dict is faster than list append 179 included = { } 180 181 # Sort the list from largest to smallest 182 itemlist = items.items() 183 itemlist.sort(lambda x, y: cmp(y[1][1], x[1][1])) # sort descending 184 keys = [] 185 for item in itemlist: 186 keys.append(item[0]) 187 188 # Search the list 189 used = 0 190 remaining = capacity 191 for key in keys: 192 if remaining == 0: 193 break 194 if remaining - items[key][1] >= 0: 195 included[key] = None 196 used += items[key][1] 197 remaining -= items[key][1] 198 199 # Return the results 200 return (included.keys(), used)
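A hedged sketch of the best-fit strategy follows, translated to modern Python (the original's `cmp`-based descending sort becomes a `key=`/`reverse=True` sort in Python 3). The function name `best_fit` and the sample data are illustrative only.

```python
def best_fit(items, capacity):
    """items maps key -> (item, size); returns (chosen keys, used capacity)."""
    # Python 3 equivalent of the Python 2 descending cmp-sort in the listing
    keys = sorted(items, key=lambda k: items[k][1], reverse=True)
    included, used, remaining = {}, 0, capacity
    for key in keys:
        if remaining == 0:
            break
        size = items[key][1]
        if remaining - size >= 0:
            included[key] = None
            used += size
            remaining -= size
    return (list(included.keys()), used)

# Largest item (7) goes in first; 5 and 3 no longer fit, but 1 tops the
# container off exactly, so only two of four items are included.
items = {"a": ("a", 7), "b": ("b", 5), "c": ("c", 3), "d": ("d", 1)}
chosen, used = best_fit(items, 8)
```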
    201 202 203 ###################### 204 # worstFit() function 205 ###################### 206
    207 -def worstFit(items, capacity):
208 209 """ 210 Implements the worst-fit knapsack algorithm. 211 212 The worst-fit algorithm proceeds through a sorted list of items (sorted 213 from smallest to largest) until running out of items or meeting capacity 214 exactly. If capacity is exceeded, the item that caused capacity to be 215 exceeded is thrown away and the next one is tried. The algorithm 216 effectively includes the maximum number of items possible in its search for 217 optimal capacity utilization. It tends to be somewhat slower than either 218 the best-fit or alternate-fit algorithm, probably because on average it has 219 to look at more items before completing. 220 221 The "size" values in the items and capacity arguments must be comparable, 222 but they are unitless from the perspective of this function. Zero-sized 223 items and capacity are considered degenerate cases. If capacity is zero, 224 no items fit, period, even if the items list contains zero-sized items. 225 226 The dictionary is indexed by its key, and then includes its key. This 227 seems kind of strange at first glance. It works this way to facilitate 228 easy sorting of the list on key if needed. 229 230 The function assumes that the list of items may be used destructively, if 231 needed. This avoids the overhead of having the function make a copy of the 232 list, if this is not required. Callers should pass C{items.copy()} if they 233 do not want their version of the list modified. 234 235 The function returns a list of chosen items and the unitless amount of 236 capacity used by the items. 
237 238 @param items: Items to operate on 239 @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 240 241 @param capacity: Capacity of container to fit to 242 @type capacity: integer 243 244 @returns: Tuple C{(items, used)} as described above 245 """ 246 247 # Use dict since insert into dict is faster than list append 248 included = { } 249 250 # Sort the list from smallest to largest 251 itemlist = items.items() 252 itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1])) # sort ascending 253 keys = [] 254 for item in itemlist: 255 keys.append(item[0]) 256 257 # Search the list 258 used = 0 259 remaining = capacity 260 for key in keys: 261 if remaining == 0: 262 break 263 if remaining - items[key][1] >= 0: 264 included[key] = None 265 used += items[key][1] 266 remaining -= items[key][1] 267 268 # Return results 269 return (included.keys(), used)
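For contrast with best-fit, here is a hedged modern-Python sketch of the worst-fit loop (ascending sort, otherwise the same greedy inclusion). On the same hypothetical data used for the best-fit example, it commits to the small items first and can end up with worse utilization; the names and data are illustrative, not from the module.

```python
def worst_fit(items, capacity):
    """items maps key -> (item, size); returns (chosen keys, used capacity)."""
    keys = sorted(items, key=lambda k: items[k][1])  # smallest to largest
    included, used, remaining = {}, 0, capacity
    for key in keys:
        if remaining == 0:
            break
        size = items[key][1]
        if remaining - size >= 0:
            included[key] = None
            used += size
            remaining -= size
    return (list(included.keys()), used)

# Items 1 and 3 are taken first, leaving 4 units of space that neither
# remaining item (5 or 7) can fill: only 4 of 8 units get used.
items = {"a": ("a", 7), "b": ("b", 5), "c": ("c", 3), "d": ("d", 1)}
chosen, used = worst_fit(items, 8)
```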
    270 271 272 ########################## 273 # alternateFit() function 274 ########################## 275
    276 -def alternateFit(items, capacity):
277 278 """ 279 Implements the alternate-fit knapsack algorithm. 280 281 This algorithm (which I'm calling "alternate-fit" as in "alternate from one 282 to the other") tries to balance small and large items to achieve better 283 end-of-disk performance. Instead of just working one direction through a 284 list, it alternately works from the start and end of a sorted list (sorted 285 from smallest to largest), throwing away any item which causes capacity to 286 be exceeded. The algorithm tends to be slower than the best-fit and 287 first-fit algorithms, and slightly faster than the worst-fit algorithm, 288 probably because of the number of items it considers on average before 289 completing. It often achieves slightly better capacity utilization than the 290 worst-fit algorithm, while including slightly fewer items. 291 292 The "size" values in the items and capacity arguments must be comparable, 293 but they are unitless from the perspective of this function. Zero-sized 294 items and capacity are considered degenerate cases. If capacity is zero, 295 no items fit, period, even if the items list contains zero-sized items. 296 297 The dictionary is indexed by its key, and then includes its key. This 298 seems kind of strange at first glance. It works this way to facilitate 299 easy sorting of the list on key if needed. 300 301 The function assumes that the list of items may be used destructively, if 302 needed. This avoids the overhead of having the function make a copy of the 303 list, if this is not required. Callers should pass C{items.copy()} if they 304 do not want their version of the list modified. 305 306 The function returns a list of chosen items and the unitless amount of 307 capacity used by the items. 
308 309 @param items: Items to operate on 310 @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 311 312 @param capacity: Capacity of container to fit to 313 @type capacity: integer 314 315 @returns: Tuple C{(items, used)} as described above 316 """ 317 318 # Use dict since insert into dict is faster than list append 319 included = { } 320 321 # Sort the list from smallest to largest 322 itemlist = items.items() 323 itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1])) # sort ascending 324 keys = [] 325 for item in itemlist: 326 keys.append(item[0]) 327 328 # Search the list 329 used = 0 330 remaining = capacity 331 332 front = keys[0:len(keys)/2] 333 back = keys[len(keys)/2:len(keys)] 334 back.reverse() 335 336 i = 0 337 j = 0 338 339 while remaining > 0 and (i < len(front) or j < len(back)): 340 if i < len(front): 341 if remaining - items[front[i]][1] >= 0: 342 included[front[i]] = None 343 used += items[front[i]][1] 344 remaining -= items[front[i]][1] 345 i += 1 346 if j < len(back): 347 if remaining - items[back[j]][1] >= 0: 348 included[back[j]] = None 349 used += items[back[j]][1] 350 remaining -= items[back[j]][1] 351 j += 1 352 353 # Return results 354 return (included.keys(), used)
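The front/back alternation above can be sketched as follows in modern Python. Note that the listing's Python 2 expression C{len(keys)/2} must become `len(keys) // 2` in Python 3, where `/` yields a float. The function name and sample data are illustrative only.

```python
def alternate_fit(items, capacity):
    """items maps key -> (item, size); returns (chosen keys, used capacity)."""
    keys = sorted(items, key=lambda k: items[k][1])  # smallest to largest
    front = keys[:len(keys) // 2]   # '//' where the Python 2 code uses '/'
    back = keys[len(keys) // 2:]
    back.reverse()                  # so back[0] is the largest item
    included, used, remaining = {}, 0, capacity
    i = j = 0
    while remaining > 0 and (i < len(front) or j < len(back)):
        if i < len(front):          # take one from the small end...
            size = items[front[i]][1]
            if remaining - size >= 0:
                included[front[i]] = None
                used += size
                remaining -= size
            i += 1
        if j < len(back):           # ...then one from the large end
            size = items[back[j]][1]
            if remaining - size >= 0:
                included[back[j]] = None
                used += size
                remaining -= size
            j += 1
    return (list(included.keys()), used)

# Alternating: smallest item 1 first, then largest item 7, filling
# capacity 8 exactly with one small and one large item.
items = {"a": ("a", 7), "b": ("b", 5), "c": ("c", 3), "d": ("d", 1)}
chosen, used = alternate_fit(items, 8)
```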
    355

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.stage-pysrc.html0000664000175000017500000044554412143054365027310 0ustar pronovicpronovic00000000000000 CedarBackup2.actions.stage
    Package CedarBackup2 :: Package actions :: Module stage
    [hide private]
    [frames] | no frames]

    Source Code for Module CedarBackup2.actions.stage

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: stage.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Implements the standard 'stage' action. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Implements the standard 'stage' action. 
     41  @sort: executeStage 
     42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import os 
     52  import time 
     53  import logging 
     54   
     55  # Cedar Backup modules 
     56  from CedarBackup2.peer import RemotePeer, LocalPeer 
     57  from CedarBackup2.util import getUidGid, changeOwnership, isStartOfWeek, isRunningAsRoot 
     58  from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
     59  from CedarBackup2.actions.util import writeIndicatorFile 
     60   
     61   
     62  ######################################################################## 
     63  # Module-wide constants and variables 
     64  ######################################################################## 
     65   
     66  logger = logging.getLogger("CedarBackup2.log.actions.stage") 
     67   
     68   
     69  ######################################################################## 
     70  # Public functions 
     71  ######################################################################## 
     72   
     73  ########################## 
     74  # executeStage() function 
     75  ########################## 
     76   
    
    77 -def executeStage(configPath, options, config):
78 """ 79 Executes the stage backup action. 80 81 @note: The daily directory is derived once and then we stick with it, just 82 in case a backup happens to span midnight. 83 84 @note: As portions of the stage action are completed, we will write various 85 indicator files so that it's obvious what actions have been completed. Each 86 peer gets a stage indicator in its collect directory, and then the master 87 gets a stage indicator in its daily staging directory. The store process 88 uses the master's stage indicator to decide whether a directory is ready to 89 be stored. Currently, nothing uses the indicator at each peer, and it 90 exists for reference only. 91 92 @param configPath: Path to configuration file on disk. 93 @type configPath: String representing a path on disk. 94 95 @param options: Program command-line options. 96 @type options: Options object. 97 98 @param config: Program configuration. 99 @type config: Config object. 100 101 @raise ValueError: Under many generic error conditions 102 @raise IOError: If there are problems reading or writing files. 103 """ 104 logger.debug("Executing the 'stage' action.") 105 if config.options is None or config.stage is None: 106 raise ValueError("Stage configuration is not properly filled in.") 107 dailyDir = _getDailyDir(config) 108 localPeers = _getLocalPeers(config) 109 remotePeers = _getRemotePeers(config) 110 allPeers = localPeers + remotePeers 111 stagingDirs = _createStagingDirs(config, dailyDir, allPeers) 112 for peer in allPeers: 113 logger.info("Staging peer [%s]." % peer.name) 114 ignoreFailures = _getIgnoreFailuresFlag(options, config, peer) 115 if not peer.checkCollectIndicator(): 116 if not ignoreFailures: 117 logger.error("Peer [%s] was not ready to be staged." % peer.name) 118 else: 119 logger.info("Peer [%s] was not ready to be staged." % peer.name) 120 continue 121 logger.debug("Found collect indicator.") 122 targetDir = stagingDirs[peer.name] 123 if isRunningAsRoot(): 124 # Since we're running as root, we can change ownership 125 ownership = getUidGid(config.options.backupUser, config.options.backupGroup) 126 logger.debug("Using target dir [%s], ownership [%d:%d]." % (targetDir, ownership[0], ownership[1])) 127 else: 128 # Non-root cannot change ownership, so don't set it 129 ownership = None 130 logger.debug("Using target dir [%s], ownership [None]." % targetDir) 131 try: 132 count = peer.stagePeer(targetDir=targetDir, ownership=ownership) # note: utilize effective user's default umask 133 logger.info("Staged %d files for peer [%s]." % (count, peer.name)) 134 peer.writeStageIndicator() 135 except (ValueError, IOError, OSError), e: 136 logger.error("Error staging [%s]: %s" % (peer.name, e)) 137 writeIndicatorFile(dailyDir, STAGE_INDICATOR, config.options.backupUser, config.options.backupGroup) 138 logger.info("Executed the 'stage' action successfully.")
    139 140 141 ######################################################################## 142 # Private utility functions 143 ######################################################################## 144 145 ################################ 146 # _createStagingDirs() function 147 ################################ 148
    149 -def _createStagingDirs(config, dailyDir, peers):
    150 """ 151 Creates staging directories as required. 152 153 The main staging directory is the passed in daily directory, something like 154 C{staging/2002/05/23}. Then, individual peers get their own directories, 155 i.e. C{staging/2002/05/23/host}. 156 157 @param config: Config object. 158 @param dailyDir: Daily staging directory. 159 @param peers: List of all configured peers. 160 161 @return: Dictionary mapping peer name to staging directory. 162 """ 163 mapping = {} 164 if os.path.isdir(dailyDir): 165 logger.warn("Staging directory [%s] already existed." % dailyDir) 166 else: 167 try: 168 logger.debug("Creating staging directory [%s]." % dailyDir) 169 os.makedirs(dailyDir) 170 for path in [ dailyDir, os.path.join(dailyDir, ".."), os.path.join(dailyDir, "..", ".."), ]: 171 changeOwnership(path, config.options.backupUser, config.options.backupGroup) 172 except Exception, e: 173 raise Exception("Unable to create staging directory: %s" % e) 174 for peer in peers: 175 peerDir = os.path.join(dailyDir, peer.name) 176 mapping[peer.name] = peerDir 177 if os.path.isdir(peerDir): 178 logger.warn("Peer staging directory [%s] already existed." % peerDir) 179 else: 180 try: 181 logger.debug("Creating peer staging directory [%s]." % peerDir) 182 os.makedirs(peerDir) 183 changeOwnership(peerDir, config.options.backupUser, config.options.backupGroup) 184 except Exception, e: 185 raise Exception("Unable to create staging directory: %s" % e) 186 return mapping
    187 188 189 ######################################################################## 190 # Private attribute "getter" functions 191 ######################################################################## 192 193 #################################### 194 # _getIgnoreFailuresFlag() function 195 #################################### 196
    197 -def _getIgnoreFailuresFlag(options, config, peer):
    198 """ 199 Gets the ignore failures flag based on options, configuration, and peer. 200 @param options: Options object 201 @param config: Configuration object 202 @param peer: Peer to check 203 @return: Whether to ignore stage failures for this peer 204 """ 205 logger.debug("Ignore failure mode for this peer: %s" % peer.ignoreFailureMode) 206 if peer.ignoreFailureMode is None or peer.ignoreFailureMode == "none": 207 return False 208 elif peer.ignoreFailureMode == "all": 209 return True 210 else: 211 if options.full or isStartOfWeek(config.options.startingDay): 212 return peer.ignoreFailureMode == "weekly" 213 else: 214 return peer.ignoreFailureMode == "daily"
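The decision logic above reduces to a small pure function, sketched here for illustration. The name `ignore_failures` and its parameters are hypothetical stand-ins: `mode` mirrors C{peer.ignoreFailureMode}, and `start_of_week` stands in for the C{options.full or isStartOfWeek(...)} check.

```python
def ignore_failures(mode, start_of_week):
    """Return whether stage failures should be ignored for a peer.

    mode: None, "none", "all", "weekly", or "daily" (as in configuration).
    start_of_week: True for a full backup or the configured starting day.
    """
    if mode is None or mode == "none":
        return False            # never ignore failures
    if mode == "all":
        return True             # always ignore failures
    # "weekly" ignores failures only on full/start-of-week runs;
    # "daily" ignores them only on incremental (mid-week) runs.
    if start_of_week:
        return mode == "weekly"
    return mode == "daily"
```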
    215 216 217 ########################## 218 # _getDailyDir() function 219 ########################## 220
    221 -def _getDailyDir(config):
    222 """ 223 Gets the daily staging directory. 224 225 This is just a directory in the form C{staging/YYYY/MM/DD}, i.e. 226 C{staging/2000/10/07}, except it will be an absolute path based on 227 C{config.stage.targetDir}. 228 229 @param config: Config object 230 231 @return: Path of daily staging directory. 232 """ 233 dailyDir = os.path.join(config.stage.targetDir, time.strftime(DIR_TIME_FORMAT)) 234 logger.debug("Daily staging directory is [%s]." % dailyDir) 235 return dailyDir
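The path construction above can be sketched standalone. Note the value of C{DIR_TIME_FORMAT} lives in C{actions.constants} and is not shown in this listing; the `"%Y/%m/%d"` pattern below is an assumption inferred from the C{staging/YYYY/MM/DD} examples in the docstring, and `daily_dir` is a hypothetical name.

```python
import os
import time

DIR_TIME_FORMAT = "%Y/%m/%d"   # assumed; the real constant is defined elsewhere

def daily_dir(target_dir, when=None):
    """Build an absolute staging path like target_dir/YYYY/MM/DD."""
    stamp = time.strftime(DIR_TIME_FORMAT, when or time.localtime())
    return os.path.join(target_dir, stamp)

# Using a fixed date so the result is deterministic:
when = time.strptime("2002/05/23", "%Y/%m/%d")
path = daily_dir("/opt/backup/staging", when)
```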
    236 237 238 ############################ 239 # _getLocalPeers() function 240 ############################ 241
    242 -def _getLocalPeers(config):
    243 """ 244 Return a list of L{LocalPeer} objects based on configuration. 245 @param config: Config object. 246 @return: List of L{LocalPeer} objects. 247 """ 248 localPeers = [] 249 configPeers = None 250 if config.stage.hasPeers(): 251 logger.debug("Using list of local peers from stage configuration.") 252 configPeers = config.stage.localPeers 253 elif config.peers is not None and config.peers.hasPeers(): 254 logger.debug("Using list of local peers from peers configuration.") 255 configPeers = config.peers.localPeers 256 if configPeers is not None: 257 for peer in configPeers: 258 localPeer = LocalPeer(peer.name, peer.collectDir, peer.ignoreFailureMode) 259 localPeers.append(localPeer) 260 logger.debug("Found local peer: [%s]" % localPeer.name) 261 return localPeers
    262 263 264 ############################# 265 # _getRemotePeers() function 266 ############################# 267
    268 -def _getRemotePeers(config):
    269 """ 270 Return a list of L{RemotePeer} objects based on configuration. 271 @param config: Config object. 272 @return: List of L{RemotePeer} objects. 273 """ 274 remotePeers = [] 275 configPeers = None 276 if config.stage.hasPeers(): 277 logger.debug("Using list of remote peers from stage configuration.") 278 configPeers = config.stage.remotePeers 279 elif config.peers is not None and config.peers.hasPeers(): 280 logger.debug("Using list of remote peers from peers configuration.") 281 configPeers = config.peers.remotePeers 282 if configPeers is not None: 283 for peer in configPeers: 284 remoteUser = _getRemoteUser(config, peer) 285 localUser = _getLocalUser(config) 286 rcpCommand = _getRcpCommand(config, peer) 287 remotePeer = RemotePeer(peer.name, peer.collectDir, config.options.workingDir, 288 remoteUser, rcpCommand, localUser, 289 ignoreFailureMode=peer.ignoreFailureMode) 290 remotePeers.append(remotePeer) 291 logger.debug("Found remote peer: [%s]" % remotePeer.name) 292 return remotePeers
    293 294 295 ############################ 296 # _getRemoteUser() function 297 ############################ 298
    299 -def _getRemoteUser(config, remotePeer):
    300 """ 301 Gets the remote user associated with a remote peer. 302 Use peer's if possible, otherwise take from options section. 303 @param config: Config object. 304 @param remotePeer: Configuration-style remote peer object. 305 @return: Name of remote user associated with remote peer. 306 """ 307 if remotePeer.remoteUser is None: 308 return config.options.backupUser 309 return remotePeer.remoteUser
    310 311 312 ########################### 313 # _getLocalUser() function 314 ########################### 315
    316 -def _getLocalUser(config):
317 """ 318 Gets the local user to be used when connecting to remote peers. 319 @param config: Config object. 320 @return: Name of local user that should be used 321 """ 322 if not isRunningAsRoot(): 323 return None 324 return config.options.backupUser
    325 326 327 ############################ 328 # _getRcpCommand() function 329 ############################ 330
    331 -def _getRcpCommand(config, remotePeer):
    332 """ 333 Gets the RCP command associated with a remote peer. 334 Use peer's if possible, otherwise take from options section. 335 @param config: Config object. 336 @param remotePeer: Configuration-style remote peer object. 337 @return: RCP command associated with remote peer. 338 """ 339 if remotePeer.rcpCommand is None: 340 return config.options.rcpCommand 341 return remotePeer.rcpCommand
    342

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.dvdwriter-pysrc.html0000664000175000017500000110045412143054364030242 0ustar pronovicpronovic00000000000000 CedarBackup2.writers.dvdwriter
    Package CedarBackup2 :: Package writers :: Module dvdwriter
    [hide private]
    [frames] | no frames]

    Source Code for Module CedarBackup2.writers.dvdwriter

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: dvdwriter.py 1041 2013-05-10 02:05:13Z pronovic $ 
     31  # Purpose  : Provides functionality related to DVD writer devices. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides functionality related to DVD writer devices. 
     41   
     42  @sort: MediaDefinition, DvdWriter, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW 
     43   
     44  @var MEDIA_DVDPLUSR: Constant representing DVD+R media. 
     45  @var MEDIA_DVDPLUSRW: Constant representing DVD+RW media. 
     46   
     47  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     48  @author: Dmitry Rutsky <rutsky@inbox.ru> 
     49  """ 
     50   
     51  ######################################################################## 
     52  # Imported modules 
     53  ######################################################################## 
     54   
     55  # System modules 
     56  import os 
     57  import re 
     58  import logging 
     59  import tempfile 
     60  import time 
     61   
     62  # Cedar Backup modules 
     63  from CedarBackup2.writers.util import IsoImage 
     64  from CedarBackup2.util import resolveCommand, executeCommand 
     65  from CedarBackup2.util import convertSize, displayBytes, encodePath 
     66  from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_GBYTES 
     67  from CedarBackup2.writers.util import validateDevice, validateDriveSpeed 
     68   
     69   
     70  ######################################################################## 
     71  # Module-wide constants and variables 
     72  ######################################################################## 
     73   
     74  logger = logging.getLogger("CedarBackup2.log.writers.dvdwriter") 
     75   
     76  MEDIA_DVDPLUSR  = 1 
     77  MEDIA_DVDPLUSRW = 2 
     78   
     79  GROWISOFS_COMMAND = [ "growisofs", ] 
     80  EJECT_COMMAND     = [ "eject", ] 
    
    81 82 83 ######################################################################## 84 # MediaDefinition class definition 85 ######################################################################## 86 87 -class MediaDefinition(object):
    88 89 """ 90 Class encapsulating information about DVD media definitions. 91 92 The following media types are accepted: 93 94 - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) 95 - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) 96 97 Note that the capacity attribute returns capacity in terms of ISO sectors 98 (C{util.ISO_SECTOR_SIZE)}. This is for compatibility with the CD writer 99 functionality. 100 101 The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes 102 of 1024*1024*1024 bytes per gigabyte. 103 104 @sort: __init__, mediaType, rewritable, capacity 105 """ 106
    107 - def __init__(self, mediaType):
    108 """ 109 Creates a media definition for the indicated media type. 110 @param mediaType: Type of the media, as discussed above. 111 @raise ValueError: If the media type is unknown or unsupported. 112 """ 113 self._mediaType = None 114 self._rewritable = False 115 self._capacity = 0.0 116 self._setValues(mediaType)
    117
    118 - def _setValues(self, mediaType):
    119 """ 120 Sets values based on media type. 121 @param mediaType: Type of the media, as discussed above. 122 @raise ValueError: If the media type is unknown or unsupported. 123 """ 124 if mediaType not in [MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW, ]: 125 raise ValueError("Invalid media type %d." % mediaType) 126 self._mediaType = mediaType 127 if self._mediaType == MEDIA_DVDPLUSR: 128 self._rewritable = False 129 self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB 130 elif self._mediaType == MEDIA_DVDPLUSRW: 131 self._rewritable = True 132 self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB
    133
    134 - def _getMediaType(self):
    135 """ 136 Property target used to get the media type value. 137 """ 138 return self._mediaType
    139
    140 - def _getRewritable(self):
    141 """ 142 Property target used to get the rewritable flag value. 143 """ 144 return self._rewritable
    145
    146 - def _getCapacity(self):
    147 """ 148 Property target used to get the capacity value. 149 """ 150 return self._capacity
    151 152 mediaType = property(_getMediaType, None, None, doc="Configured media type.") 153 rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") 154 capacity = property(_getCapacity, None, None, doc="Total capacity of media in 2048-byte sectors.")
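The 4.4 "true" GB vs 4.7 "marketing" GB distinction above is just a units question, sketched here as a back-of-the-envelope check of what C{convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)} should produce. The constant values are assumptions based on the docstrings (2048-byte ISO sectors, binary gigabytes), not taken from the module itself.

```python
ISO_SECTOR_SIZE = 2048          # bytes per ISO-9660 sector (assumed)
BYTES_PER_GB = 1024 ** 3        # "true" gigabyte, per the docstring

# Capacity of DVD+R / DVD+RW media in sectors:
capacity_sectors = 4.4 * BYTES_PER_GB / ISO_SECTOR_SIZE

# The same byte count expressed in decimal "marketing" gigabytes:
marketing_gb = 4.4 * BYTES_PER_GB / 10 ** 9
```

So 4.4 binary GB is roughly 2.3 million ISO sectors, and the same media is sold as "4.7 GB" because vendors count decimal gigabytes.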
    155
    156 157 ######################################################################## 158 # MediaCapacity class definition 159 ######################################################################## 160 161 -class MediaCapacity(object):
    162 163 """ 164 Class encapsulating information about DVD media capacity. 165 166 Space used and space available do not include any information about media 167 lead-in or other overhead. 168 169 @sort: __init__, bytesUsed, bytesAvailable, totalCapacity, utilized 170 """ 171
    172 - def __init__(self, bytesUsed, bytesAvailable):
    173 """ 174 Initializes a capacity object. 175 @raise ValueError: If the bytes used and available values are not floats. 176 """ 177 self._bytesUsed = float(bytesUsed) 178 self._bytesAvailable = float(bytesAvailable)
    179
    180 - def __str__(self):
    181 """ 182 Informal string representation for class instance. 183 """ 184 return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized)
    185
    186 - def _getBytesUsed(self):
    187 """ 188 Property target used to get the bytes-used value. 189 """ 190 return self._bytesUsed
    191
    192 - def _getBytesAvailable(self):
    193 """ 194 Property target available to get the bytes-available value. 195 """ 196 return self._bytesAvailable
    197
    198 - def _getTotalCapacity(self):
    199 """ 200 Property target to get the total capacity (used + available). 201 """ 202 return self.bytesUsed + self.bytesAvailable
    203
    204 - def _getUtilized(self):
    205 """ 206 Property target to get the percent of capacity which is utilized. 207 """ 208 if self.bytesAvailable <= 0.0: 209 return 100.0 210 elif self.bytesUsed <= 0.0: 211 return 0.0 212 return (self.bytesUsed / self.totalCapacity) * 100.0
    213 214 bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.") 215 bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.") 216 totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.") 217 utilized = property(_getUtilized, None, None, "Percentage of the total capacity which is utilized.")
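The capacity arithmetic above is simple enough to verify with a minimal standalone sketch. The class name `Capacity` and the sample byte counts are made up for illustration; the logic (including the degenerate zero cases) mirrors the property implementations in the listing.

```python
class Capacity:
    """Minimal stand-in for MediaCapacity's derived properties."""
    def __init__(self, bytes_used, bytes_available):
        self.bytes_used = float(bytes_used)
        self.bytes_available = float(bytes_available)

    @property
    def total_capacity(self):
        return self.bytes_used + self.bytes_available

    @property
    def utilized(self):
        if self.bytes_available <= 0.0:
            return 100.0        # nothing left: fully utilized
        if self.bytes_used <= 0.0:
            return 0.0          # nothing written yet
        return (self.bytes_used / self.total_capacity) * 100.0

c = Capacity(1 * 1024 ** 3, 3 * 1024 ** 3)   # 1 GB used of 4 GB total
```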
    218
    219 220 ######################################################################## 221 # _ImageProperties class definition 222 ######################################################################## 223 224 -class _ImageProperties(object):
    225 """ 226 Simple value object to hold image properties for C{DvdWriter}. 227 """
    228 - def __init__(self):
    229 self.newDisc = False 230 self.tmpdir = None 231 self.mediaLabel = None 232 self.entries = None # dict mapping path to graft point
    233
    234 235 ######################################################################## 236 # DvdWriter class definition 237 ######################################################################## 238 239 -class DvdWriter(object):
    240 241 ###################### 242 # Class documentation 243 ###################### 244 245 """ 246 Class representing a device that knows how to write some kinds of DVD media. 247 248 Summary 249 ======= 250 251 This is a class representing a device that knows how to write some kinds 252 of DVD media. It provides common operations for the device, such as 253 ejecting the media and writing data to the media. 254 255 This class is implemented in terms of the C{eject} and C{growisofs} 256 utilities, all of which should be available on most UN*X platforms. 257 258 Image Writer Interface 259 ====================== 260 261 The following methods make up the "image writer" interface shared 262 with other kinds of writers:: 263 264 __init__ 265 initializeImage() 266 addImageEntry() 267 writeImage() 268 setImageNewDisc() 269 retrieveCapacity() 270 getEstimatedImageSize() 271 272 Only these methods will be used by other Cedar Backup functionality 273 that expects a compatible image writer. 274 275 The media attribute is also assumed to be available. 276 277 Unlike the C{CdWriter}, the C{DvdWriter} can only operate in terms of 278 filesystem devices, not SCSI devices. So, although the constructor 279 interface accepts a SCSI device parameter for the sake of compatibility, 280 it's not used. 281 282 Media Types 283 =========== 284 285 This class knows how to write to DVD+R and DVD+RW media, represented 286 by the following constants: 287 288 - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) 289 - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) 290 291 The difference is that DVD+RW media can be rewritten, while DVD+R media 292 cannot be (although at present, C{DvdWriter} does not really 293 differentiate between rewritable and non-rewritable media). 294 295 The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes 296 of 1024*1024*1024 bytes per gigabyte. 
297 298 The underlying C{growisofs} utility does support other kinds of media 299 (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently 300 than standard DVD+R and DVD+RW media. I don't support these other kinds 301 of media because I haven't had any opportunity to work with them. The 302 same goes for dual-layer media of any type. 303 304 Device Attributes vs. Media Attributes 305 ====================================== 306 307 As with the cdwriter functionality, a given dvdwriter instance has two 308 different kinds of attributes associated with it. I call these device 309 attributes and media attributes. 310 311 Device attributes are things which can be determined without looking at 312 the media. Media attributes are attributes which vary depending on the 313 state of the media. In general, device attributes are available via 314 instance variables and are constant over the life of an object, while 315 media attributes can be retrieved through method calls. 316 317 Compared to cdwriters, dvdwriters have very few attributes. This is due 318 to differences between the way C{growisofs} works relative to 319 C{cdrecord}. 320 321 Media Capacity 322 ============== 323 324 One major difference between the C{cdrecord}/C{mkisofs} utilities used by 325 the cdwriter class and the C{growisofs} utility used here is that the 326 process of estimating remaining capacity and image size is more 327 straightforward with C{cdrecord}/C{mkisofs} than with C{growisofs}. 328 329 In this class, remaining capacity is calculated by doing a dry run 330 of C{growisofs} and grabbing some information from the output of that 331 command. Image size is estimated by asking the C{IsoImage} class for an 332 estimate and then adding on a "fudge factor" determined through 333 experimentation. 334 335 Testing 336 ======= 337 338 It's rather difficult to test this code in an automated fashion, even if 339 you have access to a physical DVD writer drive. 
It's even more difficult 340 to test it if you are running on some build daemon (think of a Debian 341 autobuilder) which can't be expected to have any hardware or any media 342 that you could write to. 343 344 Because of this, some of the implementation below is in terms of static 345 methods that are supposed to take defined actions based on their 346 arguments. Public methods are then implemented in terms of a series of 347 calls to simplistic static methods. This way, we can test as much as 348 possible of the "difficult" functionality via testing the static methods, 349 while hoping that if the static methods are called appropriately, things 350 will work properly. It's not perfect, but it's much better than no 351 testing at all. 352 353 @sort: __init__, isRewritable, retrieveCapacity, openTray, closeTray, refreshMedia, 354 initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize, 355 _writeImage, _getEstimatedImageSize, _searchForOverburn, _buildWriteArgs, 356 device, scsiId, hardwareId, driveSpeed, media, deviceHasTray, deviceCanEject 357 """ 358 359 ############## 360 # Constructor 361 ############## 362
    363 - def __init__(self, device, scsiId=None, driveSpeed=None, 364 mediaType=MEDIA_DVDPLUSRW, noEject=False, 365 refreshMediaDelay=0, ejectDelay=0, unittest=False):
    366 """ 367 Initializes a DVD writer object. 368 369 Since C{growisofs} can only address devices using the device path (i.e. 370 C{/dev/dvd}), the hardware id will always be set based on the device. If 371 passed in, it will be saved for reference purposes only. 372 373 We have no way to query the device to ask whether it has a tray or can be 374 safely opened and closed. So, the C{noEject} flag is used to set these 375 values. If C{noEject=False}, then we assume a tray exists and open/close 376 is safe. If C{noEject=True}, then we assume that there is no tray and 377 open/close is not safe. 378 379 @note: The C{unittest} parameter should never be set to C{True} 380 outside of Cedar Backup code. It is intended for use in unit testing 381 Cedar Backup internals and has no other sensible purpose. 382 383 @param device: Filesystem device associated with this writer. 384 @type device: Absolute path to a filesystem device, i.e. C{/dev/dvd} 385 386 @param scsiId: SCSI id for the device (optional, for reference only). 387 @type scsiId: If provided, SCSI id in the form C{[<method>:]scsibus,target,lun} 388 389 @param driveSpeed: Speed at which the drive writes. 390 @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. 391 392 @param mediaType: Type of the media that is assumed to be in the drive. 393 @type mediaType: One of the valid media type as discussed above. 394 395 @param noEject: Tells Cedar Backup that the device cannot safely be ejected 396 @type noEject: Boolean true/false 397 398 @param refreshMediaDelay: Refresh media delay to use, if any 399 @type refreshMediaDelay: Number of seconds, an integer >= 0 400 401 @param ejectDelay: Eject delay to use, if any 402 @type ejectDelay: Number of seconds, an integer >= 0 403 404 @param unittest: Turns off certain validations, for use in unit testing. 405 @type unittest: Boolean true/false 406 407 @raise ValueError: If the device is not valid for some reason. 
408 @raise ValueError: If the SCSI id is not in a valid form. 409 @raise ValueError: If the drive speed is not an integer >= 1. 410 """ 411 if scsiId is not None: 412 logger.warn("SCSI id [%s] will be ignored by DvdWriter." % scsiId) 413 self._image = None # optionally filled in by initializeImage() 414 self._device = validateDevice(device, unittest) 415 self._scsiId = scsiId # not validated, because it's just for reference 416 self._driveSpeed = validateDriveSpeed(driveSpeed) 417 self._media = MediaDefinition(mediaType) 418 self._refreshMediaDelay = refreshMediaDelay 419 self._ejectDelay = ejectDelay 420 if noEject: 421 self._deviceHasTray = False 422 self._deviceCanEject = False 423 else: 424 self._deviceHasTray = True # just assume 425 self._deviceCanEject = True # just assume
    426 427 428 ############# 429 # Properties 430 ############# 431
    432 - def _getDevice(self):
    433 """ 434 Property target used to get the device value. 435 """ 436 return self._device
    437
    438 - def _getScsiId(self):
    439 """ 440 Property target used to get the SCSI id value. 441 """ 442 return self._scsiId
    443
    444 - def _getHardwareId(self):
    445 """ 446 Property target used to get the hardware id value. 447 """ 448 return self._device
    449
    450 - def _getDriveSpeed(self):
    451 """ 452 Property target used to get the drive speed. 453 """ 454 return self._driveSpeed
    455
    456 - def _getMedia(self):
    457 """ 458 Property target used to get the media description. 459 """ 460 return self._media
    461
    462 - def _getDeviceHasTray(self):
    463 """ 464 Property target used to get the device-has-tray flag. 465 """ 466 return self._deviceHasTray
    467
    468 - def _getDeviceCanEject(self):
    469 """ 470 Property target used to get the device-can-eject flag. 471 """ 472 return self._deviceCanEject
    473
    474 - def _getRefreshMediaDelay(self):
    475 """ 476 Property target used to get the configured refresh media delay, in seconds. 477 """ 478 return self._refreshMediaDelay
    479
    480 - def _getEjectDelay(self):
    481 """ 482 Property target used to get the configured eject delay, in seconds. 483 """ 484 return self._ejectDelay
    485 486 device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") 487 scsiId = property(_getScsiId, None, None, doc="SCSI id for the device (saved for reference only).") 488 hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer (always the device path).") 489 driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") 490 media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") 491 deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") 492 deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") 493 refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") 494 ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") 495 496 497 ################################################# 498 # Methods related to device and media attributes 499 ################################################# 500
    501 - def isRewritable(self):
    502 """Indicates whether the media is rewritable per configuration.""" 503 return self._media.rewritable
    504
    505 - def retrieveCapacity(self, entireDisc=False):
    506 """ 507 Retrieves capacity for the current media in terms of a C{MediaCapacity} 508 object. 509 510 If C{entireDisc} is passed in as C{True}, the capacity will be for the 511 entire disc, as if it were to be rewritten from scratch. The same will 512 happen if the disc can't be read for some reason. Otherwise, the capacity 513 will be calculated by subtracting the sectors currently used on the disc, 514 as reported by C{growisofs} itself. 515 516 @param entireDisc: Indicates whether to return capacity for entire disc. 517 @type entireDisc: Boolean true/false 518 519 @return: C{MediaCapacity} object describing the capacity of the media. 520 521 @raise ValueError: If there is a problem parsing the C{growisofs} output 522 @raise IOError: If the media could not be read for some reason. 523 """ 524 sectorsUsed = 0 525 if not entireDisc: 526 sectorsUsed = self._retrieveSectorsUsed() 527 sectorsAvailable = self._media.capacity - sectorsUsed # both are in sectors 528 bytesUsed = convertSize(sectorsUsed, UNIT_SECTORS, UNIT_BYTES) 529 bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) 530 return MediaCapacity(bytesUsed, bytesAvailable)
    531 532 533 ####################################################### 534 # Methods used for working with the internal ISO image 535 ####################################################### 536
    537 - def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
    538 """ 539 Initializes the writer's associated ISO image. 540 541 This method initializes the C{image} instance variable so that the caller 542 can use the C{addImageEntry} method. Once entries have been added, the 543 C{writeImage} method can be called with no arguments. 544 545 @param newDisc: Indicates whether the disc should be re-initialized 546 @type newDisc: Boolean true/false 547 548 @param tmpdir: Temporary directory to use if needed 549 @type tmpdir: String representing a directory path on disk 550 551 @param mediaLabel: Media label to be applied to the image, if any 552 @type mediaLabel: String, no more than 25 characters long 553 """ 554 self._image = _ImageProperties() 555 self._image.newDisc = newDisc 556 self._image.tmpdir = encodePath(tmpdir) 557 self._image.mediaLabel = mediaLabel 558 self._image.entries = {} # mapping from path to graft point (if any)
    559
    560 - def addImageEntry(self, path, graftPoint):
    561 """ 562 Adds a filepath entry to the writer's associated ISO image. 563 564 The contents of the filepath -- but not the path itself -- will be added 565 to the image at the indicated graft point. If you don't want to use a 566 graft point, just pass C{None}. 567 568 @note: Before calling this method, you must call L{initializeImage}. 569 570 @param path: File or directory to be added to the image 571 @type path: String representing a path on disk 572 573 @param graftPoint: Graft point to be used when adding this entry 574 @type graftPoint: String representing a graft point path, as described above 575 576 @raise ValueError: If initializeImage() was not previously called 577 @raise ValueError: If the path is not a valid file or directory 578 """ 579 if self._image is None: 580 raise ValueError("Must call initializeImage() before using this method.") 581 if not os.path.exists(path): 582 raise ValueError("Path [%s] does not exist." % path) 583 self._image.entries[path] = graftPoint
    584
    585 - def setImageNewDisc(self, newDisc):
    586 """ 587 Resets (overrides) the newDisc flag on the internal image. 588 @param newDisc: New disc flag to set 589 @raise ValueError: If initializeImage() was not previously called 590 """ 591 if self._image is None: 592 raise ValueError("Must call initializeImage() before using this method.") 593 self._image.newDisc = newDisc
    594
    595 - def getEstimatedImageSize(self):
    596 """ 597 Gets the estimated size of the image associated with the writer. 598 599 This is an estimate and is conservative. The actual image could be as 600 much as 450 blocks (sectors) smaller under some circmstances. 601 602 @return: Estimated size of the image, in bytes. 603 604 @raise IOError: If there is a problem calling C{mkisofs}. 605 @raise ValueError: If initializeImage() was not previously called 606 """ 607 if self._image is None: 608 raise ValueError("Must call initializeImage() before using this method.") 609 return DvdWriter._getEstimatedImageSize(self._image.entries)
    610 611 612 ###################################### 613 # Methods which expose device actions 614 ###################################### 615
    616 - def openTray(self):
    617 """ 618 Opens the device's tray and leaves it open. 619 620 This only works if the device has a tray and supports ejecting its media. 621 We have no way to know if the tray is currently open or closed, so we 622 just send the appropriate command and hope for the best. If the device 623 does not have a tray or does not support ejecting its media, then we do 624 nothing. 625 626 Starting with Debian wheezy on my backup hardware, I started seeing 627 consistent problems with the eject command. I couldn't tell whether 628 these problems were due to the device management system or to the new 629 kernel (3.2.0). Initially, I saw simple eject failures, possibly because 630 I was opening and closing the tray too quickly. I worked around that 631 behavior with the new ejectDelay flag. 632 633 Later, I sometimes ran into issues after writing an image to a disc: 634 eject would give errors like "unable to eject, last error: Inappropriate 635 ioctl for device". Various sources online (like Ubuntu bug #875543) 636 suggested that the drive was being locked somehow, and that the 637 workaround was to run 'eject -i off' to unlock it. Sure enough, that 638 fixed the problem for me, so now it's a normal error-handling strategy. 639 640 @raise IOError: If there is an error talking to the device. 641 """ 642 if self._deviceHasTray and self._deviceCanEject: 643 command = resolveCommand(EJECT_COMMAND) 644 args = [ self.device, ] 645 result = executeCommand(command, args)[0] 646 if result != 0: 647 logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") 648 self.unlockTray() 649 result = executeCommand(command, args)[0] 650 if result != 0: 651 raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result) 652 logger.debug("Kludge was apparently successful.") 653 if self.ejectDelay is not None: 654 logger.debug("Per configuration, sleeping %d seconds after opening tray." 
% self.ejectDelay) 655 time.sleep(self.ejectDelay)
    656
    657 - def unlockTray(self):
    658 """ 659 Unlocks the device's tray via 'eject -i off'. 660 @raise IOError: If there is an error talking to the device. 661 """ 662 command = resolveCommand(EJECT_COMMAND) 663 args = [ "-i", "off", self.device, ] 664 result = executeCommand(command, args)[0] 665 if result != 0: 666 raise IOError("Error (%d) executing eject command to unlock tray." % result)
    667
    668 - def closeTray(self):
    669 """ 670 Closes the device's tray. 671 672 This only works if the device has a tray and supports ejecting its media. 673 We have no way to know if the tray is currently open or closed, so we 674 just send the appropriate command and hope for the best. If the device 675 does not have a tray or does not support ejecting its media, then we do 676 nothing. 677 678 @raise IOError: If there is an error talking to the device. 679 """ 680 if self._deviceHasTray and self._deviceCanEject: 681 command = resolveCommand(EJECT_COMMAND) 682 args = [ "-t", self.device, ] 683 result = executeCommand(command, args)[0] 684 if result != 0: 685 raise IOError("Error (%d) executing eject command to close tray." % result)
    686
    687 - def refreshMedia(self):
    688 """ 689 Opens and then immediately closes the device's tray, to refresh the 690 device's idea of the media. 691 692 Sometimes, a device gets confused about the state of its media. Often, 693 all it takes to solve the problem is to eject the media and then 694 immediately reload it. (There are also configurable eject and refresh 695 media delays which can be applied, for situations where this makes a 696 difference.) 697 698 This only works if the device has a tray and supports ejecting its media. 699 We have no way to know if the tray is currently open or closed, so we 700 just send the appropriate command and hope for the best. If the device 701 does not have a tray or does not support ejecting its media, then we do 702 nothing. The configured delays still apply, though. 703 704 @raise IOError: If there is an error talking to the device. 705 """ 706 self.openTray() 707 self.closeTray() 708 self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! 709 if self.refreshMediaDelay is not None: 710 logger.debug("Per configuration, sleeping %d seconds to stabilize media state." % self.refreshMediaDelay) 711 time.sleep(self.refreshMediaDelay) 712 logger.debug("Media refresh complete; hopefully media state is stable now.")
    713
    714 - def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
    715 """ 716 Writes an ISO image to the media in the device. 717 718 If C{newDisc} is passed in as C{True}, we assume that the entire disc 719 will be re-created from scratch. Note that unlike C{CdWriter}, 720 C{DvdWriter} does not blank rewritable media before reusing it; however, 721 C{growisofs} is called such that the media will be re-initialized as 722 needed. 723 724 If C{imagePath} is passed in as C{None}, then the existing image 725 configured with C{initializeImage()} will be used. Under these 726 circumstances, the passed-in C{newDisc} flag will be ignored and the 727 value passed in to C{initializeImage()} will apply instead. 728 729 The C{writeMulti} argument is ignored. It exists for compatibility with 730 the Cedar Backup image writer interface. 731 732 @note: The image size indicated in the log ("Image size will be...") is 733 an estimate. The estimate is conservative and is probably larger than 734 the actual space that C{dvdwriter} will use. 735 736 @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image 737 @type imagePath: String representing a path on disk 738 739 @param newDisc: Indicates whether the disc should be re-initialized 740 @type newDisc: Boolean true/false. 741 742 @param writeMulti: Unused 743 @type writeMulti: Boolean true/false 744 745 @raise ValueError: If the image path is not absolute. 746 @raise ValueError: If some path cannot be encoded properly. 747 @raise IOError: If the media could not be written to for some reason. 748 @raise ValueError: If no image is passed in and initializeImage() was not previously called 749 """ 750 if not writeMulti: 751 logger.warn("writeMulti value of [%s] ignored." % writeMulti) 752 if imagePath is None: 753 if self._image is None: 754 raise ValueError("Must call initializeImage() before using this method with no image path.") 755 size = self.getEstimatedImageSize() 756 logger.info("Image size will be %s (estimated)." 
% displayBytes(size)) 757 available = self.retrieveCapacity(entireDisc=self._image.newDisc).bytesAvailable 758 if size > available: 759 logger.error("Image [%s] does not fit in available capacity [%s]." % (displayBytes(size), displayBytes(available))) 760 raise IOError("Media does not contain enough capacity to store image.") 761 self._writeImage(self._image.newDisc, None, self._image.entries, self._image.mediaLabel) 762 else: 763 if not os.path.isabs(imagePath): 764 raise ValueError("Image path must be absolute.") 765 imagePath = encodePath(imagePath) 766 self._writeImage(newDisc, imagePath, None)
    767 768 769 ################################################################## 770 # Utility methods for dealing with growisofs and dvd+rw-mediainfo 771 ################################################################## 772
    773 - def _writeImage(self, newDisc, imagePath, entries, mediaLabel=None):
    774 """ 775 Writes an image to disc using either an entries list or an ISO image on 776 disk. 777 778 Callers are assumed to have done validation on paths, etc. before calling 779 this method. 780 781 @param newDisc: Indicates whether the disc should be re-initialized 782 @param imagePath: Path to an ISO image on disk, or c{None} to use C{entries} 783 @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} 784 785 @raise IOError: If the media could not be written to for some reason. 786 """ 787 command = resolveCommand(GROWISOFS_COMMAND) 788 args = DvdWriter._buildWriteArgs(newDisc, self.hardwareId, self._driveSpeed, imagePath, entries, mediaLabel, dryRun=False) 789 (result, output) = executeCommand(command, args, returnOutput=True) 790 if result != 0: 791 DvdWriter._searchForOverburn(output) # throws own exception if overburn condition is found 792 raise IOError("Error (%d) executing command to write disc." % result) 793 self.refreshMedia()
    794 795 @staticmethod
    796 - def _getEstimatedImageSize(entries):
    797 """ 798 Gets the estimated size of a set of image entries. 799 800 This is implemented in terms of the C{IsoImage} class. The returned 801 value is calculated by adding a "fudge factor" to the value from 802 C{IsoImage}. This fudge factor was determined by experimentation and is 803 conservative -- the actual image could be as much as 450 blocks smaller 804 under some circumstances. 805 806 @param entries: Dictionary mapping path to graft point. 807 808 @return: Total estimated size of image, in bytes. 809 810 @raise ValueError: If there are no entries in the dictionary 811 @raise ValueError: If any path in the dictionary does not exist 812 @raise IOError: If there is a problem calling C{mkisofs}. 813 """ 814 fudgeFactor = convertSize(2500.0, UNIT_SECTORS, UNIT_BYTES) # determined through experimentation 815 if len(entries.keys()) == 0: 816 raise ValueError("Must add at least one entry with addImageEntry().") 817 image = IsoImage() 818 for path in entries.keys(): 819 image.addEntry(path, entries[path], override=False, contentsOnly=True) 820 estimatedSize = image.getEstimatedSize() + fudgeFactor 821 return estimatedSize
    822
    823 - def _retrieveSectorsUsed(self):
    824 """ 825 Retrieves the number of sectors used on the current media. 826 827 This is a little ugly. We need to call growisofs in "dry-run" mode and 828 parse some information from its output. However, to do that, we need to 829 create a dummy file that we can pass to the command -- and we have to 830 make sure to remove it later. 831 832 Once growisofs has been run, then we call C{_parseSectorsUsed} to parse 833 the output and calculate the number of sectors used on the media. 834 835 @return: Number of sectors used on the media 836 """ 837 tempdir = tempfile.mkdtemp() 838 try: 839 entries = { tempdir: None } 840 args = DvdWriter._buildWriteArgs(False, self.hardwareId, self.driveSpeed, None, entries, None, dryRun=True) 841 command = resolveCommand(GROWISOFS_COMMAND) 842 (result, output) = executeCommand(command, args, returnOutput=True) 843 if result != 0: 844 logger.debug("Error (%d) calling growisofs to read sectors used." % result) 845 logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.") 846 return 0.0 847 sectorsUsed = DvdWriter._parseSectorsUsed(output) 848 logger.debug("Determined sectors used as %s" % sectorsUsed) 849 return sectorsUsed 850 finally: 851 if os.path.exists(tempdir): 852 try: 853 os.rmdir(tempdir) 854 except: pass
    855 856 @staticmethod
    857 - def _parseSectorsUsed(output):
    858 """ 859 Parse sectors used information out of C{growisofs} output. 860 861 The first line of a growisofs run looks something like this:: 862 863 Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566' 864 865 Dmitry has determined that the seek value in this line gives us 866 information about how much data has previously been written to the media. 867 That value multiplied by 16 yields the number of sectors used. 868 869 If the seek line cannot be found in the output, then sectors used of zero 870 is assumed. 871 872 @return: Sectors used on the media, as a floating point number. 873 874 @raise ValueError: If the output cannot be parsed properly. 875 """ 876 if output is not None: 877 pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)") 878 for line in output: 879 match = pattern.search(line) 880 if match is not None: 881 try: 882 return float(match.group(4).strip()) * 16.0 883 except ValueError: 884 raise ValueError("Unable to parse sectors used out of growisofs output.") 885 logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.") 886 return 0.0
    887 888 @staticmethod
    889 - def _searchForOverburn(output):
    890 """ 891 Search for an "overburn" error message in C{growisofs} output. 892 893 The C{growisofs} command returns a non-zero exit code and puts a message 894 into the output -- even on a dry run -- if there is not enough space on 895 the media. This is called an "overburn" condition. 896 897 The error message looks like this:: 898 899 :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written! 900 901 This method looks for the overburn error message anywhere in the output. 902 If a matching error message is found, an C{IOError} exception is raised 903 containing relevant information about the problem. Otherwise, the method 904 call returns normally. 905 906 @param output: List of output lines to search, as from C{executeCommand} 907 908 @raise IOError: If an overburn condition is found. 909 """ 910 if output is None: 911 return 912 pattern = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)") 913 for line in output: 914 match = pattern.search(line) 915 if match is not None: 916 try: 917 available = convertSize(float(match.group(4).strip()), UNIT_SECTORS, UNIT_BYTES) 918 size = convertSize(float(match.group(6).strip()), UNIT_SECTORS, UNIT_BYTES) 919 logger.error("Image [%s] does not fit in available capacity [%s]." % (displayBytes(size), displayBytes(available))) 920 except ValueError: 921 logger.error("Image does not fit in available capacity (no useful capacity info available).") 922 raise IOError("Media does not contain enough capacity to store image.")
    923 924 @staticmethod
    925 - def _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False):
    926 """ 927 Builds a list of arguments to be passed to a C{growisofs} command. 928 929 The arguments will either cause C{growisofs} to write the indicated image 930 file to disc, or will pass C{growisofs} a list of directories or files 931 that should be written to disc. 932 933 If a new image is created, it will always be created with Rock Ridge 934 extensions (-r). A volume name will be applied (-V) if C{mediaLabel} is 935 not C{None}. 936 937 @param newDisc: Indicates whether the disc should be re-initialized 938 @param hardwareId: Hardware id for the device 939 @param driveSpeed: Speed at which the drive writes. 940 @param imagePath: Path to an ISO image on disk, or c{None} to use C{entries} 941 @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} 942 @param mediaLabel: Media label to set on the image, if any 943 @param dryRun: Says whether to make this a dry run (for checking capacity) 944 945 @note: If we write an existing image to disc, then the mediaLabel is 946 ignored. The media label is an attribute of the image, and should be set 947 on the image when it is created. 948 949 @note: We always pass the undocumented option C{-use-the-force-like=tty} 950 to growisofs. Without this option, growisofs will refuse to execute 951 certain actions when running from cron. A good example is -Z, which 952 happily overwrites an existing DVD from the command-line, but fails when 953 run from cron. It took a while to figure that out, since it worked every 954 time I tested it by hand. :( 955 956 @return: List suitable for passing to L{util.executeCommand} as C{args}. 957 958 @raise ValueError: If caller does not pass one or the other of imagePath or entries. 
959 """ 960 args = [] 961 if (imagePath is None and entries is None) or (imagePath is not None and entries is not None): 962 raise ValueError("Must use either imagePath or entries.") 963 args.append("-use-the-force-luke=tty") # tell growisofs to let us run from cron 964 if dryRun: 965 args.append("-dry-run") 966 if driveSpeed is not None: 967 args.append("-speed=%d" % driveSpeed) 968 if newDisc: 969 args.append("-Z") 970 else: 971 args.append("-M") 972 if imagePath is not None: 973 args.append("%s=%s" % (hardwareId, imagePath)) 974 else: 975 args.append(hardwareId) 976 if mediaLabel is not None: 977 args.append("-V") 978 args.append(mediaLabel) 979 args.append("-r") # Rock Ridge extensions with sane ownership and permissions 980 args.append("-graft-points") 981 keys = entries.keys() 982 keys.sort() # just so we get consistent results 983 for key in keys: 984 # Same syntax as when calling mkisofs in IsoImage 985 if entries[key] is None: 986 args.append(key) 987 else: 988 args.append("%s/=%s" % (entries[key].strip("/"), key)) 989 return args
    990
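As an illustration of the rules above, here is a hypothetical, simplified version covering only the image-on-disk case (the real method also handles entries lists, media labels, drive speed, and dry runs):

```python
# Hypothetical simplified sketch of the growisofs argument list built
# for writing an existing ISO image to disc.
def build_write_args(new_disc, hardware_id, image_path):
    args = ["-use-the-force-luke=tty"]        # allow running from cron
    args.append("-Z" if new_disc else "-M")   # new disc vs. appended session
    args.append("%s=%s" % (hardware_id, image_path))
    return args

args = build_write_args(True, "/dev/dvd", "/tmp/backup.iso")
# ['-use-the-force-luke=tty', '-Z', '/dev/dvd=/tmp/backup.iso']
```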

CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.purge-module.html

CedarBackup2.actions.purge

    Module purge


    Implements the standard 'purge' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
     
    executePurge(configPath, options, config)
    Executes the purge backup action.
Variables
      logger = logging.getLogger("CedarBackup2.log.actions.purge")
      __package__ = 'CedarBackup2.actions'
Function Details

    executePurge(configPath, options, config)


    Executes the purge backup action.

    For each configured directory, we create a purge item list, remove from the list anything that's younger than the configured retain days value, and then purge from the filesystem what's left.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
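The retain-days rule described above can be sketched as follows (a hypothetical illustration of the filtering rule, not the module's actual implementation):

```python
import time

# Hypothetical sketch of the purge rule: keep anything younger than
# retain_days, and purge everything else.
def items_to_purge(items, retain_days, now=None):
    """items: (path, mtime) pairs; returns the paths old enough to purge."""
    if now is None:
        now = time.time()
    cutoff = now - retain_days * 24 * 60 * 60
    return [path for (path, mtime) in items if mtime <= cutoff]

now = 1000000000
items = [("old.tar", now - 10 * 86400), ("new.tar", now - 1 * 86400)]
purged = items_to_purge(items, retain_days=7, now=now)  # ['old.tar']
```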

CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.RemotePeer-class.html

CedarBackup2.config.RemotePeer

    Class RemotePeer


    object --+
             |
            RemotePeer
    

    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

    • The peer name must be a non-empty string.
    • The collect directory must be an absolute path.
    • The remote user must be a non-empty string.
    • The rcp command must be a non-empty string.
    • The rsh command must be a non-empty string.
    • The cback command must be a non-empty string.
    • Any managed action name must be a non-empty string matching ACTION_NAME_REGEX
    • The ignore failure mode must be one of the values in VALID_FAILURE_MODES.
Instance Methods
     
    __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None)
    Constructor for the RemotePeer class.
     
    __repr__(self)
    Official string representation for class instance.
     
    __str__(self)
    Informal string representation for class instance.
     
    __cmp__(self, other)
    Definition of equals operator for this class.
     
    _setName(self, value)
    Property target used to set the peer name.
     
    _getName(self)
    Property target used to get the peer name.
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
     
    _getCollectDir(self)
    Property target used to get the collect directory.
     
    _setRemoteUser(self, value)
    Property target used to set the remote user.
     
    _getRemoteUser(self)
    Property target used to get the remote user.
     
    _setRcpCommand(self, value)
    Property target used to set the rcp command.
     
    _getRcpCommand(self)
    Property target used to get the rcp command.
     
    _setRshCommand(self, value)
    Property target used to set the rsh command.
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target used to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setManaged(self, value)
    Property target used to set the managed flag.
    source code
     
    _getManaged(self)
    Property target used to get the managed flag.
    source code
     
    _setManagedActions(self, value)
    Property target used to set the managed actions list.
    source code
     
    _getManagedActions(self)
    Property target used to get the managed actions list.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      name
    Name of the peer, must be a valid hostname.
      collectDir
    Collect directory to stage files from on peer.
      remoteUser
    Name of backup user on remote peer.
      rcpCommand
    Overridden rcp-compatible copy command for peer.
      rshCommand
    Overridden rsh-compatible remote shell command for peer.
      cbackCommand
    Overridden cback-compatible command to use on remote peer.
      managed
    Indicates whether this is a managed peer.
      managedActions
    Overridden set of actions that are managed on the peer.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None)
    (Constructor)


    Constructor for the RemotePeer class.

    Parameters:
    • name - Name of the peer, must be a valid hostname.
    • collectDir - Collect directory to stage files from on peer.
    • remoteUser - Name of backup user on remote peer.
    • rcpCommand - Overridden rcp-compatible copy command for peer.
    • rshCommand - Overridden rsh-compatible remote shell command for peer.
    • cbackCommand - Overridden cback-compatible command to use on remote peer.
    • managed - Indicates whether this is a managed peer.
    • managedActions - Overridden set of actions that are managed on the peer.
    • ignoreFailureMode - Ignore failure mode for peer.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
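    The -1/0/1 contract comes from Python 2's __cmp__ protocol (removed in Python 3 in favor of rich comparisons). The contract itself can be sketched independently of this class:

    ```python
    def cmp_like(a, b):
        """Return -1, 0, or 1, matching the Python 2 cmp()/__cmp__ contract."""
        return (a > b) - (a < b)

    print(cmp_like("peer1", "peer2"))  # -1, since "peer1" sorts before "peer2"
    ```

    The configuration classes in this package apply the same convention across their fields, returning the first nonzero field comparison.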

    _setName(self, value)


    Property target used to set the peer name. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCollectDir(self, value)


    Property target used to set the collect directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRemoteUser(self, value)


    Property target used to set the remote user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)


    Property target used to set the rcp command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)


    Property target used to set the rsh command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)


    Property target used to set the cback command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setManaged(self, value)


    Property target used to set the managed flag. No validations, but we normalize the value to True or False.

    _setManagedActions(self, value)


    Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment.

    _setIgnoreFailureMode(self, value)


    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    name

    Name of the peer, must be a valid hostname.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Collect directory to stage files from on peer.

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    remoteUser

    Name of backup user on remote peer.

    Get Method:
    _getRemoteUser(self) - Property target used to get the remote user.
    Set Method:
    _setRemoteUser(self, value) - Property target used to set the remote user.

    rcpCommand

    Overridden rcp-compatible copy command for peer.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target used to set the rcp command.

    rshCommand

    Overridden rsh-compatible remote shell command for peer.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target used to set the rsh command.

    cbackCommand

    Overridden cback-compatible command to use on remote peer.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target used to set the cback command.

    managed

    Indicates whether this is a managed peer.

    Get Method:
    _getManaged(self) - Property target used to get the managed flag.
    Set Method:
    _setManaged(self, value) - Property target used to set the managed flag.

    managedActions

    Overridden set of actions that are managed on the peer.

    Get Method:
    _getManagedActions(self) - Property target used to get the managed actions list.
    Set Method:
    _setManagedActions(self, value) - Property target used to set the managed actions list.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.capacity-pysrc.html

    CedarBackup2.extend.capacity
    Package CedarBackup2 :: Package extend :: Module capacity

    Source Code for Module CedarBackup2.extend.capacity

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: capacity.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Provides an extension to check remaining media capacity. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides an extension to check remaining media capacity. 
     41   
     42  Some users have asked for advance warning that their media is beginning to fill 
     43  up.  This is an extension that checks the current capacity of the media in the 
     44  writer, and prints a warning if the media is more than X% full, or has fewer 
     45  than X bytes of capacity remaining. 
     46   
     47  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     48  """ 
     49   
     50  ######################################################################## 
     51  # Imported modules 
     52  ######################################################################## 
     53   
     54  # System modules 
     55  import logging 
     56   
     57  # Cedar Backup modules 
     58  from CedarBackup2.util import displayBytes 
     59  from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode 
     60  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode 
     61  from CedarBackup2.xmlutil import readFirstChild, readString 
     62  from CedarBackup2.actions.util import createWriter, checkMediaState 
     63   
     64   
     65  ######################################################################## 
     66  # Module-wide constants and variables 
     67  ######################################################################## 
     68   
     69  logger = logging.getLogger("CedarBackup2.log.extend.capacity") 
    
     70   
     71   
     72  ######################################################################## 
     73  # Percentage class definition 
     74  ######################################################################## 
     75   
     76  class PercentageQuantity(object): 
     77   
     78     """ 
     79     Class representing a percentage quantity. 
     80   
     81     The percentage is maintained internally as a string so that issues of 
     82     precision can be avoided.  It really isn't possible to store a floating 
     83     point number here while being able to losslessly translate back and forth 
     84     between XML and object representations.  (Perhaps the Python 2.4 Decimal 
     85     class would have been an option, but I originally wanted to stay compatible 
     86     with Python 2.3.) 
     87   
     88     Even though the quantity is maintained as a string, the string must be a 
     89     valid floating point positive number.  Technically, any floating point 
     90     string format supported by Python is allowable.  However, it does not make 
     91     sense to have a negative percentage in this context. 
     92   
     93     @sort: __init__, __repr__, __str__, __cmp__, quantity 
     94     """ 
     95   
     96     def __init__(self, quantity=None): 
     97        """ 
     98        Constructor for the C{PercentageQuantity} class. 
     99        @param quantity: Percentage quantity, as a string (i.e. "99.9" or "12") 
    100        @raise ValueError: If the quantity value is invalid. 
    101        """ 
    102        self._quantity = None 
    103        self.quantity = quantity 
    104   
    105     def __repr__(self): 
    106        """ 
    107        Official string representation for class instance. 
    108        """ 
    109        return "PercentageQuantity(%s)" % (self.quantity) 
    110   
    111     def __str__(self): 
    112        """ 
    113        Informal string representation for class instance. 
    114        """ 
    115        return self.__repr__() 
    116   
    117     def __cmp__(self, other): 
    118        """ 
    119        Definition of equals operator for this class. 
    120        Lists within this class are "unordered" for equality comparisons. 
    121        @param other: Other object to compare to. 
    122        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
    123        """ 
    124        if other is None: 
    125           return 1 
    126        if self.quantity != other.quantity: 
    127           if self.quantity < other.quantity: 
    128              return -1 
    129           else: 
    130              return 1 
    131        return 0 
    132   
    133     def _setQuantity(self, value): 
    134        """ 
    135        Property target used to set the quantity. 
    136        The value must be a non-empty string if it is not C{None}. 
    137        @raise ValueError: If the value is an empty string. 
    138        @raise ValueError: If the value is not a valid floating point number 
    139        @raise ValueError: If the value is less than zero 
    140        """ 
    141        if value is not None: 
    142           if len(value) < 1: 
    143              raise ValueError("Percentage must be a non-empty string.") 
    144           floatValue = float(value) 
    145           if floatValue < 0.0 or floatValue > 100.0: 
    146              raise ValueError("Percentage must be a positive value from 0.0 to 100.0") 
    147        self._quantity = value  # keep around string 
    148   
    149     def _getQuantity(self): 
    150        """ 
    151        Property target used to get the quantity. 
    152        """ 
    153        return self._quantity 
    154   
    155     def _getPercentage(self): 
    156        """ 
    157        Property target used to get the quantity as a floating point number. 
    158        If there is no quantity set, then a value of 0.0 is returned. 
    159        """ 
    160        if self.quantity is not None: 
    161           return float(self.quantity) 
    162        return 0.0 
    163   
    164     quantity = property(_getQuantity, _setQuantity, None, doc="Percentage value, as a string") 
    165     percentage = property(_getPercentage, None, None, "Percentage value, as a floating point number.") 
    166   
    167   
    168  ######################################################################## 
    169  # CapacityConfig class definition 
    170  ######################################################################## 
    171   
    172  class CapacityConfig(object): 
    173   
    174     """ 
    175     Class representing capacity configuration. 
    176   
    177     The following restrictions exist on data in this class: 
    178   
    179        - The maximum percentage utilized must be a PercentageQuantity 
    180        - The minimum bytes remaining must be a ByteQuantity 
    181   
    182     @sort: __init__, __repr__, __str__, __cmp__, maxPercentage, minBytes 
    183     """ 
    184   
    185     def __init__(self, maxPercentage=None, minBytes=None): 
    186        """ 
    187        Constructor for the C{CapacityConfig} class. 
    188   
    189        @param maxPercentage: Maximum percentage of the media that may be utilized 
    190        @param minBytes: Minimum number of free bytes that must be available 
    191        """ 
    192        self._maxPercentage = None 
    193        self._minBytes = None 
    194        self.maxPercentage = maxPercentage 
    195        self.minBytes = minBytes 
    196   
    197     def __repr__(self): 
    198        """ 
    199        Official string representation for class instance. 
    200        """ 
    201        return "CapacityConfig(%s, %s)" % (self.maxPercentage, self.minBytes) 
    202   
    203     def __str__(self): 
    204        """ 
    205        Informal string representation for class instance. 
    206        """ 
    207        return self.__repr__() 
    208   
    209     def __cmp__(self, other): 
    210        """ 
    211        Definition of equals operator for this class. 
    212        @param other: Other object to compare to. 
    213        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
    214        """ 
    215        if other is None: 
    216           return 1 
    217        if self.maxPercentage != other.maxPercentage: 
    218           if self.maxPercentage < other.maxPercentage: 
    219              return -1 
    220           else: 
    221              return 1 
    222        if self.minBytes != other.minBytes: 
    223           if self.minBytes < other.minBytes: 
    224              return -1 
    225           else: 
    226              return 1 
    227        return 0 
    228   
    229     def _setMaxPercentage(self, value): 
    230        """ 
    231        Property target used to set the maxPercentage value. 
    232        If not C{None}, the value must be a C{PercentageQuantity} object. 
    233        @raise ValueError: If the value is not a C{PercentageQuantity} 
    234        """ 
    235        if value is None: 
    236           self._maxPercentage = None 
    237        else: 
    238           if not isinstance(value, PercentageQuantity): 
    239              raise ValueError("Value must be a C{PercentageQuantity} object.") 
    240           self._maxPercentage = value 
    241   
    242     def _getMaxPercentage(self): 
    243        """ 
    244        Property target used to get the maxPercentage value. 
    245        """ 
    246        return self._maxPercentage 
    247   
    248     def _setMinBytes(self, value): 
    249        """ 
    250        Property target used to set the bytes utilized value. 
    251        If not C{None}, the value must be a C{ByteQuantity} object. 
    252        @raise ValueError: If the value is not a C{ByteQuantity} 
    253        """ 
    254        if value is None: 
    255           self._minBytes = None 
    256        else: 
    257           if not isinstance(value, ByteQuantity): 
    258              raise ValueError("Value must be a C{ByteQuantity} object.") 
    259           self._minBytes = value 
    260   
    261     def _getMinBytes(self): 
    262        """ 
    263        Property target used to get the bytes remaining value. 
    264        """ 
    265        return self._minBytes 
    266   
    267     maxPercentage = property(_getMaxPercentage, _setMaxPercentage, None, "Maximum percentage of the media that may be utilized.") 
    268     minBytes = property(_getMinBytes, _setMinBytes, None, "Minimum number of free bytes that must be available.") 
    269   
    270   
    271  ######################################################################## 
    272  # LocalConfig class definition 
    273  ######################################################################## 
    274   
    275  class LocalConfig(object): 
    276   
    277     """ 
    278     Class representing this extension's configuration document. 
    279   
    280     This is not a general-purpose configuration object like the main Cedar 
    281     Backup configuration object.  Instead, it just knows how to parse and emit 
    282     specific configuration values to this extension.  Third parties who need to 
    283     read and write configuration related to this extension should access it 
    284     through the constructor, C{validate} and C{addConfig} methods. 
    285   
    286     @note: Lists within this class are "unordered" for equality comparisons. 
    287   
    288     @sort: __init__, __repr__, __str__, __cmp__, capacity, validate, addConfig 
    289     """ 
    290   
    291     def __init__(self, xmlData=None, xmlPath=None, validate=True): 
    292        """ 
    293        Initializes a configuration object. 
    294   
    295        If you initialize the object without passing either C{xmlData} or 
    296        C{xmlPath} then configuration will be empty and will be invalid until it 
    297        is filled in properly. 
    298   
    299        No reference to the original XML data or original path is saved off by 
    300        this class.  Once the data has been parsed (successfully or not) this 
    301        original information is discarded. 
    302   
    303        Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 
    304        method will be called (with its default arguments) against configuration 
    305        after successfully parsing any passed-in XML.  Keep in mind that even if 
    306        C{validate} is C{False}, it might not be possible to parse the passed-in 
    307        XML document if lower-level validations fail. 
    308   
    309        @note: It is strongly suggested that the C{validate} option always be set 
    310        to C{True} (the default) unless there is a specific need to read in 
    311        invalid configuration from disk. 
    312   
    313        @param xmlData: XML data representing configuration. 
    314        @type xmlData: String data. 
    315   
    316        @param xmlPath: Path to an XML file on disk. 
    317        @type xmlPath: Absolute path to a file on disk. 
    318   
    319        @param validate: Validate the document after parsing it. 
    320        @type validate: Boolean true/false. 
    321   
    322        @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 
    323        @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 
    324        @raise ValueError: If the parsed configuration document is not valid. 
    325        """ 
    326        self._capacity = None 
    327        self.capacity = None 
    328        if xmlData is not None and xmlPath is not None: 
    329           raise ValueError("Use either xmlData or xmlPath, but not both.") 
    330        if xmlData is not None: 
    331           self._parseXmlData(xmlData) 
    332           if validate: 
    333              self.validate() 
    334        elif xmlPath is not None: 
    335           xmlData = open(xmlPath).read() 
    336           self._parseXmlData(xmlData) 
    337           if validate: 
    338              self.validate() 
    339   
    340     def __repr__(self): 
    341        """ 
    342        Official string representation for class instance. 
    343        """ 
    344        return "LocalConfig(%s)" % (self.capacity) 
    345   
    346     def __str__(self): 
    347        """ 
    348        Informal string representation for class instance. 
    349        """ 
    350        return self.__repr__() 
    351   
    352     def __cmp__(self, other): 
    353        """ 
    354        Definition of equals operator for this class. 
    355        Lists within this class are "unordered" for equality comparisons. 
    356        @param other: Other object to compare to. 
    357        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
    358        """ 
    359        if other is None: 
    360           return 1 
    361        if self.capacity != other.capacity: 
    362           if self.capacity < other.capacity: 
    363              return -1 
    364           else: 
    365              return 1 
    366        return 0 
    367   
    368     def _setCapacity(self, value): 
    369        """ 
    370        Property target used to set the capacity configuration value. 
    371        If not C{None}, the value must be a C{CapacityConfig} object. 
    372        @raise ValueError: If the value is not a C{CapacityConfig} 
    373        """ 
    374        if value is None: 
    375           self._capacity = None 
    376        else: 
    377           if not isinstance(value, CapacityConfig): 
    378              raise ValueError("Value must be a C{CapacityConfig} object.") 
    379           self._capacity = value 
    380   
    381     def _getCapacity(self): 
    382        """ 
    383        Property target used to get the capacity configuration value. 
    384        """ 
    385        return self._capacity 
    386   
    387     capacity = property(_getCapacity, _setCapacity, None, "Capacity configuration in terms of a C{CapacityConfig} object.") 
    388   
    389     def validate(self): 
    390        """ 
    391        Validates configuration represented by the object. 
    392        There must be either a percentage, or a byte capacity, but not both. 
    393        @raise ValueError: If one of the validations fails. 
    394        """ 
    395        if self.capacity is None: 
    396           raise ValueError("Capacity section is required.") 
    397        if self.capacity.maxPercentage is None and self.capacity.minBytes is None: 
    398           raise ValueError("Must provide either max percentage or min bytes.") 
    399        if self.capacity.maxPercentage is not None and self.capacity.minBytes is not None: 
    400           raise ValueError("Must provide either max percentage or min bytes, but not both.") 
    401   
    402     def addConfig(self, xmlDom, parentNode): 
    403        """ 
    404        Adds a <capacity> configuration section as the next child of a parent. 
    405   
    406        Third parties should use this function to write configuration related to 
    407        this extension. 
    408   
    409        We add the following fields to the document:: 
    410   
    411           maxPercentage  //cb_config/capacity/max_percentage 
    412           minBytes       //cb_config/capacity/min_bytes 
    413   
    414        @param xmlDom: DOM tree as from C{impl.createDocument()}. 
    415        @param parentNode: Parent that the section should be appended to. 
    416        """ 
    417        if self.capacity is not None: 
    418           sectionNode = addContainerNode(xmlDom, parentNode, "capacity") 
    419           LocalConfig._addPercentageQuantity(xmlDom, sectionNode, "max_percentage", self.capacity.maxPercentage) 
    420           if self.capacity.minBytes is not None:  # because utility function fills in empty section on None 
    421              addByteQuantityNode(xmlDom, sectionNode, "min_bytes", self.capacity.minBytes) 
    422   
    423     def _parseXmlData(self, xmlData): 
    424        """ 
    425        Internal method to parse an XML string into the object. 
    426   
    427        This method parses the XML document into a DOM tree (C{xmlDom}) and then 
    428        calls a static method to parse the capacity configuration section. 
    429   
    430        @param xmlData: XML data to be parsed 
    431        @type xmlData: String data 
    432   
    433        @raise ValueError: If the XML cannot be successfully parsed. 
    434        """ 
    435        (xmlDom, parentNode) = createInputDom(xmlData) 
    436        self._capacity = LocalConfig._parseCapacity(parentNode) 
    437   
    438     @staticmethod 
    439     def _parseCapacity(parentNode): 
    440        """ 
    441        Parses a capacity configuration section. 
    442   
    443        We read the following fields:: 
    444   
    445           maxPercentage  //cb_config/capacity/max_percentage 
    446           minBytes       //cb_config/capacity/min_bytes 
    447   
    448        @param parentNode: Parent node to search beneath. 
    449   
    450        @return: C{CapacityConfig} object or C{None} if the section does not exist. 
    451        @raise ValueError: If some filled-in value is invalid. 
    452        """ 
    453        capacity = None 
    454        section = readFirstChild(parentNode, "capacity") 
    455        if section is not None: 
    456           capacity = CapacityConfig() 
    457           capacity.maxPercentage = LocalConfig._readPercentageQuantity(section, "max_percentage") 
    458           capacity.minBytes = readByteQuantity(section, "min_bytes") 
    459        return capacity 
    460   
    461     @staticmethod 
    462     def _readPercentageQuantity(parent, name): 
    463        """ 
    464        Read a percentage quantity value from an XML document. 
    465        @param parent: Parent node to search beneath. 
    466        @param name: Name of node to search for. 
    467        @return: Percentage quantity parsed from XML document 
    468        """ 
    469        quantity = readString(parent, name) 
    470        if quantity is None: 
    471           return None 
    472        return PercentageQuantity(quantity) 
    473   
    474     @staticmethod 
    475     def _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity): 
    476        """ 
    477        Adds a text node as the next child of a parent, to contain a percentage quantity. 
    478   
    479        If the C{percentageQuantity} is None, then no node will be created. 
    480   
    481        @param xmlDom: DOM tree as from C{impl.createDocument()}. 
    482        @param parentNode: Parent node to create child for. 
    483        @param nodeName: Name of the new container node. 
    484        @param percentageQuantity: PercentageQuantity object to put into the XML document 
    485   
    486        @return: Reference to the newly-created node. 
    487        """ 
    488        if percentageQuantity is not None: 
    489           addStringNode(xmlDom, parentNode, nodeName, percentageQuantity.quantity) 
    490   
    491   
    492  ######################################################################## 
    493  # Public functions 
    494  ######################################################################## 
    495   
    496  ########################### 
    497  # executeAction() function 
    498  ########################### 
    499   
    500  def executeAction(configPath, options, config): 
    501     """ 
    502     Executes the capacity action. 
    503   
    504     @param configPath: Path to configuration file on disk. 
    505     @type configPath: String representing a path on disk. 
    506   
    507     @param options: Program command-line options. 
    508     @type options: Options object. 
    509   
    510     @param config: Program configuration. 
    511     @type config: Config object. 
    512   
    513     @raise ValueError: Under many generic error conditions 
    514     @raise IOError: If there are I/O problems reading or writing files 
    515     """ 
    516     logger.debug("Executing capacity extended action.") 
    517     if config.options is None or config.store is None: 
    518        raise ValueError("Cedar Backup configuration is not properly filled in.") 
    519     local = LocalConfig(xmlPath=configPath) 
    520     if config.store.checkMedia: 
    521        checkMediaState(config.store)  # raises exception if media is not initialized 
    522     capacity = createWriter(config).retrieveCapacity() 
    523     logger.debug("Media capacity: %s" % capacity) 
    524     if local.capacity.maxPercentage is not None: 
    525        if capacity.utilized > local.capacity.maxPercentage.percentage: 
    526           logger.error("Media has reached capacity limit of %s%%: %.2f%% utilized" % 
    527                        (local.capacity.maxPercentage.quantity, capacity.utilized)) 
    528     else:  # if local.capacity.bytes is not None 
    529        if capacity.bytesAvailable < local.capacity.minBytes.bytes: 
    530           logger.error("Media has reached capacity limit of %s: only %s available" % 
    531                        (displayBytes(local.capacity.minBytes.bytes), displayBytes(capacity.bytesAvailable))) 
    532     logger.info("Executed the capacity extended action successfully.") 
    533   
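    As the listing shows, PercentageQuantity stores the percentage as a string and only converts to a float on demand, so the configured value round-trips through XML without precision loss. This trimmed, standalone reproduction of the setter logic (for experimentation outside Cedar Backup; it is a sketch, not the installed module) behaves like the class above:

    ```python
    class PercentageQuantity(object):
        """Minimal reproduction of the quantity setter from the listing above."""

        def __init__(self, quantity=None):
            self._quantity = None
            self.quantity = quantity  # runs the validating setter below

        def _setQuantity(self, value):
            if value is not None:
                if len(value) < 1:
                    raise ValueError("Percentage must be a non-empty string.")
                floatValue = float(value)  # raises ValueError if not a float string
                if floatValue < 0.0 or floatValue > 100.0:
                    raise ValueError("Percentage must be a positive value from 0.0 to 100.0")
            self._quantity = value  # keep around string

        def _getQuantity(self):
            return self._quantity

        def _getPercentage(self):
            # Floating point view of the stored string; 0.0 when unset
            return float(self.quantity) if self.quantity is not None else 0.0

        quantity = property(_getQuantity, _setQuantity)
        percentage = property(_getPercentage)

    q = PercentageQuantity("99.9")
    print(q.quantity, q.percentage)  # the string survives; the float is derived
    ```

    Values like "101" or "-1" are rejected at assignment time, which is why executeAction() can compare capacity.utilized directly against maxPercentage.percentage without re-validating.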

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.encrypt-pysrc.html

    CedarBackup2.extend.encrypt
    Package CedarBackup2 :: Package extend :: Module encrypt

    Source Code for Module CedarBackup2.extend.encrypt

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Revision : $Id: encrypt.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Provides an extension to encrypt staging directories. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides an extension to encrypt staging directories. 
     41   
     42  When this extension is executed, all backed-up files in the configured Cedar 
     43  Backup staging directory will be encrypted using gpg.  Any directory which has 
     44  already been encrypted (as indicated by the C{cback.encrypt} file) will be 
     45  ignored. 
     46   
     47  This extension requires a new configuration section <encrypt> and is intended 
     48  to be run immediately after the standard stage action or immediately before the 
     49  standard store action.  Aside from its own configuration, it requires the 
     50  options and staging configuration sections in the standard Cedar Backup 
     51  configuration file. 
     52   
     53  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     54  """ 

########################################################################
# Imported modules
########################################################################

# System modules
import os
import logging

# Cedar Backup modules
from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership
from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode
from CedarBackup2.xmlutil import readFirstChild, readString
from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.extend.encrypt")

GPG_COMMAND = [ "gpg", ]
VALID_ENCRYPT_MODES = [ "gpg", ]
ENCRYPT_INDICATOR = "cback.encrypt"


########################################################################
# EncryptConfig class definition
########################################################################

class EncryptConfig(object):

   """
   Class representing encrypt configuration.

   Encrypt configuration is used for encrypting staging directories.

   The following restrictions exist on data in this class:

      - The encrypt mode must be one of the values in L{VALID_ENCRYPT_MODES}
      - The encrypt target value must be a non-empty string

   @sort: __init__, __repr__, __str__, __cmp__, encryptMode, encryptTarget
   """

   def __init__(self, encryptMode=None, encryptTarget=None):
      """
      Constructor for the C{EncryptConfig} class.

      @param encryptMode: Encryption mode
      @param encryptTarget: Encryption target (for instance, GPG recipient)

      @raise ValueError: If one of the values is invalid.
      """
      self._encryptMode = None
      self._encryptTarget = None
      self.encryptMode = encryptMode
      self.encryptTarget = encryptTarget

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "EncryptConfig(%s, %s)" % (self.encryptMode, self.encryptTarget)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.encryptMode != other.encryptMode:
         if self.encryptMode < other.encryptMode:
            return -1
         else:
            return 1
      if self.encryptTarget != other.encryptTarget:
         if self.encryptTarget < other.encryptTarget:
            return -1
         else:
            return 1
      return 0

   def _setEncryptMode(self, value):
      """
      Property target used to set the encrypt mode.
      If not C{None}, the mode must be one of the values in L{VALID_ENCRYPT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ENCRYPT_MODES:
            raise ValueError("Encrypt mode must be one of %s." % VALID_ENCRYPT_MODES)
      self._encryptMode = value

   def _getEncryptMode(self):
      """
      Property target used to get the encrypt mode.
      """
      return self._encryptMode

   def _setEncryptTarget(self, value):
      """
      Property target used to set the encrypt target.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Encrypt target must be non-empty string.")
      self._encryptTarget = value

   def _getEncryptTarget(self):
      """
      Property target used to get the encrypt target.
      """
      return self._encryptTarget

   encryptMode = property(_getEncryptMode, _setEncryptMode, None, doc="Encrypt mode.")
   encryptTarget = property(_getEncryptTarget, _setEncryptTarget, None, doc="Encrypt target (i.e. GPG recipient).")

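# The field-by-field ordering implemented by EncryptConfig.__cmp__ above can
# be sketched standalone.  This is an illustrative reimplementation, not code
# from the module: a lightweight stand-in class and a comparison helper that
# returns -1/0/1 at the first differing field, just as __cmp__ does.

```python
from collections import namedtuple

# Stand-in for EncryptConfig, holding only the two compared fields.
Config = namedtuple("Config", ["encryptMode", "encryptTarget"])

def compare(a, b):
    """Return -1/0/1 by comparing each field in turn, None sorting first."""
    if b is None:
        return 1
    for field in ("encryptMode", "encryptTarget"):
        x, y = getattr(a, field), getattr(b, field)
        if x != y:
            return -1 if x < y else 1
    return 0
```

The same pattern repeats in LocalConfig.__cmp__ below, with a single field.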

########################################################################
# LocalConfig class definition
########################################################################

class LocalConfig(object):

   """
   Class representing this extension's configuration document.

   This is not a general-purpose configuration object like the main Cedar
   Backup configuration object.  Instead, it just knows how to parse and emit
   encrypt-specific configuration values.  Third parties who need to read and
   write configuration related to this extension should access it through the
   constructor, C{validate} and C{addConfig} methods.

   @sort: __init__, __repr__, __str__, __cmp__, encrypt, validate, addConfig
   """

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath}, then configuration will be empty and will be invalid until it
      is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not), this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._encrypt = None
      self.encrypt = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         xmlData = open(xmlPath).read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.encrypt)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.encrypt != other.encrypt:
         if self.encrypt < other.encrypt:
            return -1
         else:
            return 1
      return 0

   def _setEncrypt(self, value):
      """
      Property target used to set the encrypt configuration value.
      If not C{None}, the value must be an C{EncryptConfig} object.
      @raise ValueError: If the value is not an C{EncryptConfig}
      """
      if value is None:
         self._encrypt = None
      else:
         if not isinstance(value, EncryptConfig):
            raise ValueError("Value must be an C{EncryptConfig} object.")
         self._encrypt = value

   def _getEncrypt(self):
      """
      Property target used to get the encrypt configuration value.
      """
      return self._encrypt

   encrypt = property(_getEncrypt, _setEncrypt, None, "Encrypt configuration in terms of an C{EncryptConfig} object.")

   def validate(self):
      """
      Validates configuration represented by the object.

      Encrypt configuration must be filled in.  Within that, both the encrypt
      mode and encrypt target must be filled in.

      @raise ValueError: If one of the validations fails.
      """
      if self.encrypt is None:
         raise ValueError("Encrypt section is required.")
      if self.encrypt.encryptMode is None:
         raise ValueError("Encrypt mode must be set.")
      if self.encrypt.encryptTarget is None:
         raise ValueError("Encrypt target must be set.")

   def addConfig(self, xmlDom, parentNode):
      """
      Adds an <encrypt> configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.

      We add the following fields to the document::

         encryptMode    //cb_config/encrypt/encrypt_mode
         encryptTarget  //cb_config/encrypt/encrypt_target

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      """
      if self.encrypt is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "encrypt")
         addStringNode(xmlDom, sectionNode, "encrypt_mode", self.encrypt.encryptMode)
         addStringNode(xmlDom, sectionNode, "encrypt_target", self.encrypt.encryptTarget)

   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls a static method to parse the encrypt configuration section.

      @param xmlData: XML data to be parsed
      @type xmlData: String data

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._encrypt = LocalConfig._parseEncrypt(parentNode)

   @staticmethod
   def _parseEncrypt(parent):
      """
      Parses an encrypt configuration section.

      We read the following individual fields::

         encryptMode    //cb_config/encrypt/encrypt_mode
         encryptTarget  //cb_config/encrypt/encrypt_target

      @param parent: Parent node to search beneath.

      @return: C{EncryptConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      encrypt = None
      section = readFirstChild(parent, "encrypt")
      if section is not None:
         encrypt = EncryptConfig()
         encrypt.encryptMode = readString(section, "encrypt_mode")
         encrypt.encryptTarget = readString(section, "encrypt_target")
      return encrypt

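# The parsing done by _parseEncrypt can be sketched with the standard library
# alone.  This is not the CedarBackup2 xmlutil API (readFirstChild/readString
# do more); it is a self-contained illustration of which fields are read and
# from where, using xml.dom.minidom and an illustrative recipient value.

```python
import xml.dom.minidom

xmlData = """<cb_config>
                <encrypt>
                   <encrypt_mode>gpg</encrypt_mode>
                   <encrypt_target>backup@example.com</encrypt_target>
                </encrypt>
             </cb_config>"""

def readText(parent, tag):
    """Return the stripped text of the first matching child, or None."""
    nodes = parent.getElementsByTagName(tag)
    return nodes[0].firstChild.data.strip() if nodes else None

dom = xml.dom.minidom.parseString(xmlData)
section = dom.getElementsByTagName("encrypt")[0]
mode = readText(section, "encrypt_mode")      # //cb_config/encrypt/encrypt_mode
target = readText(section, "encrypt_target")  # //cb_config/encrypt/encrypt_target
```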

########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the encrypt backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are I/O problems reading or writing files
   """
   logger.debug("Executing encrypt extended action.")
   if config.options is None or config.stage is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   if local.encrypt.encryptMode not in VALID_ENCRYPT_MODES:
      raise ValueError("Unknown encrypt mode [%s]" % local.encrypt.encryptMode)
   if local.encrypt.encryptMode == "gpg":
      _confirmGpgRecipient(local.encrypt.encryptTarget)
   dailyDirs = findDailyDirs(config.stage.targetDir, ENCRYPT_INDICATOR)
   for dailyDir in dailyDirs:
      _encryptDailyDir(dailyDir, local.encrypt.encryptMode, local.encrypt.encryptTarget,
                       config.options.backupUser, config.options.backupGroup)
      writeIndicatorFile(dailyDir, ENCRYPT_INDICATOR, config.options.backupUser, config.options.backupGroup)
   logger.info("Executed the encrypt extended action successfully.")

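# The indicator-file convention executeAction relies on can be sketched in
# isolation: a daily directory that already contains cback.encrypt is skipped
# on later runs.  This toy sketch uses throwaway temp paths and a plain
# os.walk; the real findDailyDirs/writeIndicatorFile helpers do more.

```python
import os
import tempfile

ENCRYPT_INDICATOR = "cback.encrypt"

staging = tempfile.mkdtemp()
os.makedirs(os.path.join(staging, "2013", "05", "09"))
os.makedirs(os.path.join(staging, "2013", "05", "10"))

# Pretend the first daily directory was already encrypted on a prior run.
open(os.path.join(staging, "2013", "05", "09", ENCRYPT_INDICATOR), "w").close()

# Only leaf directories without the indicator file still need encrypting.
pending = [dirpath
           for dirpath, dirnames, filenames in os.walk(staging)
           if not dirnames and ENCRYPT_INDICATOR not in filenames]
```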

##############################
# _encryptDailyDir() function
##############################

def _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup):
   """
   Encrypts the contents of a daily staging directory.

   Indicator files are ignored.  All other files are encrypted.  The only
   valid encrypt mode is C{"gpg"}.

   @param dailyDir: Daily directory to encrypt
   @param encryptMode: Encryption mode (only "gpg" is allowed)
   @param encryptTarget: Encryption target (GPG recipient for "gpg" mode)
   @param backupUser: User that target files should be owned by
   @param backupGroup: Group that target files should be owned by

   @raise ValueError: If the encrypt mode is not supported.
   @raise ValueError: If the daily staging directory does not exist.
   """
   logger.debug("Begin encrypting contents of [%s]." % dailyDir)
   fileList = getBackupFiles(dailyDir)  # ignores indicator files
   for path in fileList:
      _encryptFile(path, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=True)
   logger.debug("Completed encrypting contents of [%s]." % dailyDir)


##########################
# _encryptFile() function
##########################

def _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False):
   """
   Encrypts the source file using the indicated mode.

   The encrypted file will be owned by the indicated backup user and group.  If
   C{removeSource} is C{True}, then the source file will be removed after it is
   successfully encrypted.

   Currently, only the C{"gpg"} encrypt mode is supported.

   @param sourcePath: Absolute path of the source file to encrypt
   @param encryptMode: Encryption mode (only "gpg" is allowed)
   @param encryptTarget: Encryption target (GPG recipient)
   @param backupUser: User that target files should be owned by
   @param backupGroup: Group that target files should be owned by
   @param removeSource: Indicates whether to remove the source file

   @return: Path to the newly-created encrypted file.

   @raise ValueError: If an invalid encrypt mode is passed in.
   @raise IOError: If there is a problem accessing, encrypting or removing the source file.
   """
   if not os.path.exists(sourcePath):
      raise ValueError("Source path [%s] does not exist." % sourcePath)
   if encryptMode == "gpg":
      encryptedPath = _encryptFileWithGpg(sourcePath, recipient=encryptTarget)
   else:
      raise ValueError("Unknown encrypt mode [%s]" % encryptMode)
   changeOwnership(encryptedPath, backupUser, backupGroup)
   if removeSource:
      if os.path.exists(sourcePath):
         try:
            os.remove(sourcePath)
            logger.debug("Completed removing old file [%s]." % sourcePath)
         except OSError:
            raise IOError("Failed to remove file [%s] after encrypting it." % sourcePath)
   return encryptedPath

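# The encrypt-then-remove flow of _encryptFile can be demonstrated without a
# real gpg binary.  In this toy sketch the gpg call is replaced by a plain
# file copy so the example is self-contained; the ".gpg" naming rule matches
# the module, but fake_encrypt and the temp paths are purely illustrative.

```python
import os
import shutil
import tempfile

def fake_encrypt(sourcePath, removeSource=False):
    """Copy sourcePath to sourcePath + '.gpg', optionally removing the source."""
    encryptedPath = "%s.gpg" % sourcePath       # same naming rule as the module
    shutil.copyfile(sourcePath, encryptedPath)  # stand-in for the real gpg call
    if removeSource and os.path.exists(sourcePath):
        os.remove(sourcePath)
    return encryptedPath

workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "backup.tar")
with open(source, "w") as f:
    f.write("data")
encrypted = fake_encrypt(source, removeSource=True)
```

With removeSource=True, only the ".gpg" file remains afterward, which is why
_encryptDailyDir always passes it: the plaintext must not linger in staging.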

#################################
# _encryptFileWithGpg() function
#################################

def _encryptFileWithGpg(sourcePath, recipient):
   """
   Encrypts the indicated source file using GPG.

   The encrypted file will be in GPG's binary output format and will have the
   same name as the source file plus a C{".gpg"} extension.  The source file
   will not be modified or removed by this function call.

   @param sourcePath: Absolute path of file to be encrypted.
   @param recipient: Recipient name to be passed to GPG's C{"-r"} option

   @return: Path to the newly-created encrypted file.

   @raise IOError: If there is a problem encrypting the file.
   """
   encryptedPath = "%s.gpg" % sourcePath
   command = resolveCommand(GPG_COMMAND)
   args = [ "--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath, ]
   result = executeCommand(command, args)[0]
   if result != 0:
      raise IOError("Error [%d] calling gpg to encrypt [%s]." % (result, sourcePath))
   if not os.path.exists(encryptedPath):
      raise IOError("After call to [%s], encrypted file [%s] does not exist." % (command, encryptedPath))
   logger.debug("Completed encrypting file [%s] to [%s]." % (sourcePath, encryptedPath))
   return encryptedPath

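# The argument list built by _encryptFileWithGpg corresponds to a single gpg
# invocation.  This sketch assembles the same list standalone so the resulting
# command line is visible; the recipient and path values are illustrative.

```python
sourcePath = "/tmp/staging/backup.tar"   # illustrative path
recipient = "backup@example.com"         # illustrative GPG recipient
encryptedPath = "%s.gpg" % sourcePath

# Same options as the module: batch mode, overwrite output, encrypt to the
# recipient's public key, write binary output to encryptedPath.
args = ["--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath]
commandLine = " ".join(["gpg"] + args)
```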

##################################
# _confirmGpgRecipient() function
##################################

def _confirmGpgRecipient(recipient):
   """
   Confirms that a recipient's public key is known to GPG.
   Throws an exception if there is a problem, or returns normally otherwise.
   @param recipient: Recipient name
   @raise IOError: If the recipient's public key is not known to GPG.
   """
   command = resolveCommand(GPG_COMMAND)
   args = [ "--batch", "-k", recipient, ]  # should use --with-colons if the output will be parsed
   result = executeCommand(command, args)[0]
   if result != 0:
      raise IOError("GPG unable to find public key for [%s]." % recipient)


CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.testutil-module.html

testutil

    Module testutil


    Functions

    availableLocales
    buildPath
    captureOutput
    changeFileAge
    commandAvailable
    extractTar
    failUnlessAssignRaises
    findResources
    getLogin
    getMaskAsMode
    hexFloatLiteralAllowed
    platformCygwin
    platformDebian
    platformHasEcho
    platformMacOsX
    platformRequiresBinaryRead
    platformSupportsLinks
    platformSupportsPermissions
    platformWindows
    randomFilename
    removedir
    runningAsRoot
    setupDebugLogger
    setupOverrides

    Variables

    __package__

CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.dvdwriter.MediaDefinition-class.html

CedarBackup2.writers.dvdwriter.MediaDefinition
Class MediaDefinition

    object --+
             |
            MediaDefinition
    

    Class encapsulating information about DVD media definitions.

    The following media types are accepted:

    • MEDIA_DVDPLUSR: DVD+R media (4.4 GB capacity)
    • MEDIA_DVDPLUSRW: DVD+RW media (4.4 GB capacity)

    Note that the capacity attribute returns capacity in terms of ISO sectors (util.ISO_SECTOR_SIZE). This is for compatibility with the CD writer functionality.

    The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
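The arithmetic behind those figures can be checked directly.  A rough sketch, assuming the stated 4.4 "true" gigabytes of 1024**3 bytes each and the 2048-byte ISO sector size the capacity property reports in:

```python
ISO_SECTOR_SIZE = 2048                      # bytes per ISO sector
capacityBytes = 4.4 * (1024 ** 3)           # 4.4 "true" gigabytes
capacitySectors = capacityBytes / ISO_SECTOR_SIZE
```

So the capacity property reports roughly 2.3 million sectors for either media type.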

Instance Methods

__init__(self, mediaType)
   Creates a media definition for the indicated media type.

_setValues(self, mediaType)
   Sets values based on media type.

_getMediaType(self)
   Property target used to get the media type value.

_getRewritable(self)
   Property target used to get the rewritable flag value.

_getCapacity(self)
   Property target used to get the capacity value.

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

mediaType
   Configured media type.

rewritable
   Boolean indicating whether the media is rewritable.

capacity
   Total capacity of media in 2048-byte sectors.

Inherited from object: __class__

Method Details

__init__(self, mediaType) (Constructor)

Creates a media definition for the indicated media type.

Parameters:
• mediaType - Type of the media, as discussed above.
Raises:
• ValueError - If the media type is unknown or unsupported.
Overrides: object.__init__

_setValues(self, mediaType)

Sets values based on media type.

Parameters:
• mediaType - Type of the media, as discussed above.
Raises:
• ValueError - If the media type is unknown or unsupported.

Property Details

mediaType

Configured media type.

Get Method: _getMediaType(self) - Property target used to get the media type value.

rewritable

Boolean indicating whether the media is rewritable.

Get Method: _getRewritable(self) - Property target used to get the rewritable flag value.

capacity

Total capacity of media in 2048-byte sectors.

Get Method: _getCapacity(self) - Property target used to get the capacity value.

CedarBackup2-2.22.0/doc/interface/epydoc.css

CedarBackup2-2.22.0/doc/interface/CedarBackup2.action-module.html

CedarBackup2.action
    Package CedarBackup2 :: Module action
    [hide private]
    [frames] | [no frames]

    Module action

    source code

    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code for the standard actions. The code formerly in action.py was split into various other files in the CedarBackup2.actions package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables [hide private]
      __package__ = 'CedarBackup2'
    CedarBackup2-2.22.0/doc/interface/CedarBackup2.cli-pysrc.html0000664000175000017500000236675712143054364025326 0ustar pronovicpronovic00000000000000 CedarBackup2.cli
    Package CedarBackup2 :: Module cli
    [hide private]
    [frames] | [no frames]

    Source Code for Module CedarBackup2.cli

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python (>= 2.5) 
      29  # Project  : Cedar Backup, release 2 
      30  # Revision : $Id: cli.py 1022 2011-10-11 23:27:49Z pronovic $ 
      31  # Purpose  : Provides command-line interface implementation. 
      32  # 
      33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      34   
      35  ######################################################################## 
      36  # Module documentation 
      37  ######################################################################## 
      38   
      39  """ 
      40  Provides command-line interface implementation for the cback script. 
      41   
      42  Summary 
      43  ======= 
      44   
      45     The functionality in this module encapsulates the command-line interface for 
      46     the cback script.  The cback script itself is very short, basically just an 
       47     invocation of one function implemented here.  That, in turn, makes it 
      48     simpler to validate the command line interface (for instance, it's easier to 
      49     run pychecker against a module, and unit tests are easier, too). 
      50   
      51     The objects and functions implemented in this module are probably not useful 
      52     to any code external to Cedar Backup.   Anyone else implementing their own 
      53     command-line interface would have to reimplement (or at least enhance) all 
      54     of this anyway. 
      55   
      56  Backwards Compatibility 
      57  ======================= 
      58   
      59     The command line interface has changed between Cedar Backup 1.x and Cedar 
      60     Backup 2.x.  Some new switches have been added, and the actions have become 
      61     simple arguments rather than switches (which is a much more standard command 
      62     line format).  Old 1.x command lines are generally no longer valid. 
      63   
      64  @var DEFAULT_CONFIG: The default configuration file. 
      65  @var DEFAULT_LOGFILE: The default log file path. 
      66  @var DEFAULT_OWNERSHIP: Default ownership for the logfile. 
      67  @var DEFAULT_MODE: Default file permissions mode on the logfile. 
      68  @var VALID_ACTIONS: List of valid actions. 
      69  @var COMBINE_ACTIONS: List of actions which can be combined with other actions. 
      70  @var NONCOMBINE_ACTIONS: List of actions which cannot be combined with other actions. 
      71   
      72  @sort: cli, Options, DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP,  
      73         DEFAULT_MODE, VALID_ACTIONS, COMBINE_ACTIONS, NONCOMBINE_ACTIONS 
      74   
      75  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      76  """ 
      77   
      78  ######################################################################## 
      79  # Imported modules 
      80  ######################################################################## 
      81   
      82  # System modules 
      83  import sys 
      84  import os 
      85  import logging 
      86  import getopt 
      87   
      88  # Cedar Backup modules 
      89  from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
      90  from CedarBackup2.customize import customizeOverrides 
      91  from CedarBackup2.util import DirectedGraph, PathResolverSingleton 
      92  from CedarBackup2.util import sortDict, splitCommandLine, executeCommand, getFunctionReference 
      93  from CedarBackup2.util import getUidGid, encodePath, Diagnostics 
      94  from CedarBackup2.config import Config 
      95  from CedarBackup2.peer import RemotePeer 
      96  from CedarBackup2.actions.collect import executeCollect 
      97  from CedarBackup2.actions.stage import executeStage 
      98  from CedarBackup2.actions.store import executeStore 
      99  from CedarBackup2.actions.purge import executePurge 
     100  from CedarBackup2.actions.rebuild import executeRebuild 
     101  from CedarBackup2.actions.validate import executeValidate 
     102  from CedarBackup2.actions.initialize import executeInitialize 
     103   
     104   
     105  ######################################################################## 
     106  # Module-wide constants and variables 
     107  ######################################################################## 
     108   
     109  logger = logging.getLogger("CedarBackup2.log.cli") 
     110   
     111  DISK_LOG_FORMAT    = "%(asctime)s --> [%(levelname)-7s] %(message)s" 
     112  DISK_OUTPUT_FORMAT = "%(message)s" 
     113  SCREEN_LOG_FORMAT  = "%(message)s" 
     114  SCREEN_LOG_STREAM  = sys.stdout 
     115  DATE_FORMAT        = "%Y-%m-%dT%H:%M:%S %Z" 
     116   
     117  DEFAULT_CONFIG     = "/etc/cback.conf" 
     118  DEFAULT_LOGFILE    = "/var/log/cback.log" 
     119  DEFAULT_OWNERSHIP  = [ "root", "adm", ] 
     120  DEFAULT_MODE       = 0640 
     121   
     122  REBUILD_INDEX      = 0        # can't run with anything else, anyway 
     123  VALIDATE_INDEX     = 0        # can't run with anything else, anyway 
     124  INITIALIZE_INDEX   = 0        # can't run with anything else, anyway 
     125  COLLECT_INDEX      = 100 
     126  STAGE_INDEX        = 200 
     127  STORE_INDEX        = 300 
     128  PURGE_INDEX        = 400 
     129   
     130  VALID_ACTIONS      = [ "collect", "stage", "store", "purge", "rebuild", "validate", "initialize", "all", ] 
     131  COMBINE_ACTIONS    = [ "collect", "stage", "store", "purge", ] 
     132  NONCOMBINE_ACTIONS = [ "rebuild", "validate", "initialize", "all", ] 
     133   
     134  SHORT_SWITCHES     = "hVbqc:fMNl:o:m:OdsD" 
     135  LONG_SWITCHES      = [ 'help', 'version', 'verbose', 'quiet',  
     136                         'config=', 'full', 'managed', 'managed-only', 
     137                         'logfile=', 'owner=', 'mode=',  
     138                         'output', 'debug', 'stack', 'diagnostics', ] 
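The SHORT_SWITCHES/LONG_SWITCHES pair above is in the format expected by Python's standard getopt module: a string of short options (a trailing ':' means the switch takes a value) plus a list of long options (a trailing '=' likewise). A minimal sketch using a hypothetical subset of the cback switch table — not the full table shown above:

```python
import getopt

# Hypothetical subset of the cback switch table, for illustration only.
SHORT = "hVc:l:"                                   # -c and -l take values
LONG = ["help", "version", "config=", "logfile="]  # --config and --logfile take values

def parse(argv):
    """Split argv into a switch map and the positional action arguments."""
    switches, actions = getopt.getopt(argv, SHORT, LONG)
    return dict(switches), actions

switches, actions = parse(["-c", "/etc/cback.conf", "collect", "stage"])
print(switches)   # {'-c': '/etc/cback.conf'}
print(actions)    # ['collect', 'stage']
```

This mirrors how cback treats actions as plain arguments rather than switches: getopt stops at the first non-switch token and returns the rest as positional arguments.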
    
    139 140 141 ####################################################################### 142 # Public functions 143 ####################################################################### 144 145 ################# 146 # cli() function 147 ################# 148 149 -def cli():
      150     """
      151     Implements the command-line interface for the C{cback} script.
      152  
      153     Essentially, this is the "main routine" for the cback script. It does all
      154     of the argument processing for the script, and then sets about executing the
      155     indicated actions.
      156  
      157     As a general rule, only the actions indicated on the command line will be
      158     executed. We will accept any of the built-in actions and any of the
      159     configured extended actions (which makes action list verification a two-
      160     step process).
      161  
      162     The C{'all'} action has a special meaning: it means that the built-in set of
      163     actions (collect, stage, store, purge) will all be executed, in that order.
      164     Extended actions will be ignored as part of the C{'all'} action.
      165  
      166     Raised exceptions always result in an immediate return. Otherwise, we
      167     generally return when all specified actions have been completed. Actions
      168     are ignored if the help, version or validate flags are set.
      169  
      170     A different error code is returned for each type of failure:
      171  
      172        - C{1}: The Python interpreter version is < 2.5
      173        - C{2}: Error processing command-line arguments
      174        - C{3}: Error configuring logging
      175        - C{4}: Error parsing indicated configuration file
      176        - C{5}: Backup was interrupted with a CTRL-C or similar
      177        - C{6}: Error executing specified backup actions
      178  
      179     @note: This function contains a good amount of logging at the INFO level,
      180     because this is the right place to document high-level flow of control (i.e.
      181     what the command-line options were, what config file was being used, etc.)
      182  
      183     @note: We assume that anything that I{must} be seen on the screen is logged
      184     at the ERROR level. Errors that occur before logging can be configured are
      185     written to C{sys.stderr}.
      186  
      187     @return: Error code as described above.
      188     """
      189     try:
      190        if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5]:
      191           sys.stderr.write("Python version 2.5 or greater required.\n")
      192           return 1
      193     except:
      194        # sys.version_info isn't available before 2.0
      195        sys.stderr.write("Python version 2.5 or greater required.\n")
      196        return 1
      197  
      198     try:
      199        options = Options(argumentList=sys.argv[1:])
      200        logger.info("Specified command-line actions: %s" % options.actions)
      201     except Exception, e:
      202        _usage()
      203        sys.stderr.write(" *** Error: %s\n" % e)
      204        return 2
      205  
      206     if options.help:
      207        _usage()
      208        return 0
      209     if options.version:
      210        _version()
      211        return 0
      212     if options.diagnostics:
      213        _diagnostics()
      214        return 0
      215  
      216     try:
      217        logfile = setupLogging(options)
      218     except Exception, e:
      219        sys.stderr.write("Error setting up logging: %s\n" % e)
      220        return 3
      221  
      222     logger.info("Cedar Backup run started.")
      223     logger.info("Options were [%s]" % options)
      224     logger.info("Logfile is [%s]" % logfile)
      225     Diagnostics().logDiagnostics(method=logger.info)
      226  
      227     if options.config is None:
      228        logger.debug("Using default configuration file.")
      229        configPath = DEFAULT_CONFIG
      230     else:
      231        logger.debug("Using user-supplied configuration file.")
      232        configPath = options.config
      233  
      234     executeLocal = True
      235     executeManaged = False
      236     if options.managedOnly:
      237        executeLocal = False
      238        executeManaged = True
      239     if options.managed:
      240        executeManaged = True
      241     logger.debug("Execute local actions: %s" % executeLocal)
      242     logger.debug("Execute managed actions: %s" % executeManaged)
      243  
      244     try:
      245        logger.info("Configuration path is [%s]" % configPath)
      246        config = Config(xmlPath=configPath)
      247        customizeOverrides(config)
      248        setupPathResolver(config)
      249        actionSet = _ActionSet(options.actions, config.extensions, config.options,
      250                               config.peers, executeManaged, executeLocal)
      251     except Exception, e:
      252        logger.error("Error reading or handling configuration: %s" % e)
      253        logger.info("Cedar Backup run completed with status 4.")
      254        return 4
      255  
      256     if options.stacktrace:
      257        actionSet.executeActions(configPath, options, config)
      258     else:
      259        try:
      260           actionSet.executeActions(configPath, options, config)
      261        except KeyboardInterrupt:
      262           logger.error("Backup interrupted.")
      263           logger.info("Cedar Backup run completed with status 5.")
      264           return 5
      265        except Exception, e:
      266           logger.error("Error executing backup: %s" % e)
      267           logger.info("Cedar Backup run completed with status 6.")
      268           return 6
      269  
      270     logger.info("Cedar Backup run completed with status 0.")
      271     return 0
    272
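The cback script itself is little more than a call to this function followed by a sys.exit() on the returned code, so shell wrappers can distinguish failure types by status. A toy stand-in (not the real cli(), whose arguments and internals differ) illustrating the documented status-code convention:

```python
def run_backup(parse_ok=True, config_ok=True, action_ok=True):
    """Toy stand-in for cli(): returns the documented status codes."""
    if not parse_ok:
        return 2      # error processing command-line arguments
    if not config_ok:
        return 4      # error parsing the indicated configuration file
    if not action_ok:
        return 6      # error executing the specified backup actions
    return 0          # success

assert run_backup() == 0
assert run_backup(config_ok=False) == 4
assert run_backup(action_ok=False) == 6
```

Because each failure class maps to a distinct code, a cron wrapper can, for example, retry only on code 6 while treating codes 2-4 as configuration problems needing human attention.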
    273 274 ######################################################################## 275 # Action-related class definition 276 ######################################################################## 277 278 #################### 279 # _ActionItem class 280 #################### 281 282 -class _ActionItem(object):
    283 284 """ 285 Class representing a single action to be executed. 286 287 This class represents a single named action to be executed, and understands 288 how to execute that action. 289 290 The built-in actions will use only the options and config values. We also 291 pass in the config path so that extension modules can re-parse configuration 292 if they want to, to add in extra information. 293 294 This class is also where pre-action and post-action hooks are executed. An 295 action item is instantiated in terms of optional pre- and post-action hook 296 objects (config.ActionHook), which are then executed at the appropriate time 297 (if set). 298 299 @note: The comparison operators for this class have been implemented to only 300 compare based on the index and SORT_ORDER value, and ignore all other 301 values. This is so that the action set list can be easily sorted first by 302 type (_ActionItem before _ManagedActionItem) and then by index within type. 303 304 @cvar SORT_ORDER: Defines a sort order to order properly between types. 305 """ 306 307 SORT_ORDER = 0 308
    309 - def __init__(self, index, name, preHook, postHook, function):
    310 """ 311 Default constructor. 312 313 It's OK to pass C{None} for C{index}, C{preHook} or C{postHook}, but not 314 for C{name}. 315 316 @param index: Index of the item (or C{None}). 317 @param name: Name of the action that is being executed. 318 @param preHook: Pre-action hook in terms of an C{ActionHook} object, or C{None}. 319 @param postHook: Post-action hook in terms of an C{ActionHook} object, or C{None}. 320 @param function: Reference to function associated with item. 321 """ 322 self.index = index 323 self.name = name 324 self.preHook = preHook 325 self.postHook = postHook 326 self.function = function
    327
    328 - def __cmp__(self, other):
    329 """ 330 Definition of equals operator for this class. 331 The only thing we compare is the item's index. 332 @param other: Other object to compare to. 333 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 334 """ 335 if other is None: 336 return 1 337 if self.index != other.index: 338 if self.index < other.index: 339 return -1 340 else: 341 return 1 342 else: 343 if self.SORT_ORDER != other.SORT_ORDER: 344 if self.SORT_ORDER < other.SORT_ORDER: 345 return -1 346 else: 347 return 1 348 return 0
    349
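The C{__cmp__} protocol used above is a Python 2 idiom. The same two-level ordering — execution index first, then SORT_ORDER, so that at a given index an _ActionItem sorts before a _ManagedActionItem — can be restated as a modern sort key. A sketch with simplified stand-in classes, not the shipped implementation:

```python
# Restating the __cmp__ ordering as a sort key: (index, SORT_ORDER).
class Item:
    SORT_ORDER = 0          # plays the role of _ActionItem
    def __init__(self, index, name):
        self.index, self.name = index, name

class ManagedItem(Item):
    SORT_ORDER = 1          # plays the role of _ManagedActionItem

items = [ManagedItem(100, "collect-managed"),
         Item(200, "stage"),
         Item(100, "collect")]
items.sort(key=lambda i: (i.index, i.SORT_ORDER))
print([i.name for i in items])  # ['collect', 'collect-managed', 'stage']
```

This reproduces the behavior the class notes describe: the local collect action runs first, then the same action on managed peers, then the next action by index.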
    350 - def executeAction(self, configPath, options, config):
    351 """ 352 Executes the action associated with an item, including hooks. 353 354 See class notes for more details on how the action is executed. 355 356 @param configPath: Path to configuration file on disk. 357 @param options: Command-line options to be passed to action. 358 @param config: Parsed configuration to be passed to action. 359 360 @raise Exception: If there is a problem executing the action. 361 """ 362 logger.debug("Executing [%s] action." % self.name) 363 if self.preHook is not None: 364 self._executeHook("pre-action", self.preHook) 365 self._executeAction(configPath, options, config) 366 if self.postHook is not None: 367 self._executeHook("post-action", self.postHook)
    368
    369 - def _executeAction(self, configPath, options, config):
    370 """ 371 Executes the action, specifically the function associated with the action. 372 @param configPath: Path to configuration file on disk. 373 @param options: Command-line options to be passed to action. 374 @param config: Parsed configuration to be passed to action. 375 """ 376 name = "%s.%s" % (self.function.__module__, self.function.__name__) 377 logger.debug("Calling action function [%s], execution index [%d]" % (name, self.index)) 378 self.function(configPath, options, config)
    379
    380 - def _executeHook(self, type, hook): # pylint: disable=W0622,R0201
    381 """ 382 Executes a hook command via L{util.executeCommand()}. 383 @param type: String describing the type of hook, for logging. 384 @param hook: Hook, in terms of a C{ActionHook} object. 385 """ 386 logger.debug("Executing %s hook for action [%s]." % (type, hook.action)) 387 fields = splitCommandLine(hook.command) 388 executeCommand(command=fields[0:1], args=fields[1:])
    389
    390 391 ########################### 392 # _ManagedActionItem class 393 ########################### 394 395 -class _ManagedActionItem(object):
    396 397 """ 398 Class representing a single action to be executed on a managed peer. 399 400 This class represents a single named action to be executed, and understands 401 how to execute that action. 402 403 Actions to be executed on a managed peer rely on peer configuration and 404 on the full-backup flag. All other configuration takes place on the remote 405 peer itself. 406 407 @note: The comparison operators for this class have been implemented to only 408 compare based on the index and SORT_ORDER value, and ignore all other 409 values. This is so that the action set list can be easily sorted first by 410 type (_ActionItem before _ManagedActionItem) and then by index within type. 411 412 @cvar SORT_ORDER: Defines a sort order to order properly between types. 413 """ 414 415 SORT_ORDER = 1 416
    417 - def __init__(self, index, name, remotePeers):
    418 """ 419 Default constructor. 420 421 @param index: Index of the item (or C{None}). 422 @param name: Name of the action that is being executed. 423 @param remotePeers: List of remote peers on which to execute the action. 424 """ 425 self.index = index 426 self.name = name 427 self.remotePeers = remotePeers
    428
    429 - def __cmp__(self, other):
    430 """ 431 Definition of equals operator for this class. 432 The only thing we compare is the item's index. 433 @param other: Other object to compare to. 434 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 435 """ 436 if other is None: 437 return 1 438 if self.index != other.index: 439 if self.index < other.index: 440 return -1 441 else: 442 return 1 443 else: 444 if self.SORT_ORDER != other.SORT_ORDER: 445 if self.SORT_ORDER < other.SORT_ORDER: 446 return -1 447 else: 448 return 1 449 return 0
    450
    451 - def executeAction(self, configPath, options, config):
      452        """
      453        Executes the managed action associated with an item.
      454  
      455        @note: Only options.full is actually used.  The rest of the arguments
      456        exist to satisfy the ActionItem interface.
      457  
      458        @note: Errors here result in a message logged at ERROR, but no exception
      459        is thrown.  The analogy is the stage action, where a problem with one host
      460        should not kill the entire backup.  Since we're logging an error, the
      461        administrator will get an email.
      462  
      463        @param configPath: Path to configuration file on disk.
      464        @param options: Command-line options to be passed to action.
      465        @param config: Parsed configuration to be passed to action.
      466  
      467        @raise Exception: If there is a problem executing the action.
      468        """
      469        for peer in self.remotePeers:
      470           logger.debug("Executing managed action [%s] on peer [%s]." % (self.name, peer.name))
      471           try:
      472              peer.executeManagedAction(self.name, options.full)
      473           except IOError, e:
      474              logger.error(e) # log the message and go on, so we don't kill the backup
    475
    476 477 ################### 478 # _ActionSet class 479 ################### 480 481 -class _ActionSet(object):
    482 483 """ 484 Class representing a set of local actions to be executed. 485 486 This class does four different things. First, it ensures that the actions 487 specified on the command-line are sensible. The command-line can only list 488 either built-in actions or extended actions specified in configuration. 489 Also, certain actions (in L{NONCOMBINE_ACTIONS}) cannot be combined with 490 other actions. 491 492 Second, the class enforces an execution order on the specified actions. Any 493 time actions are combined on the command line (either built-in actions or 494 extended actions), we must make sure they get executed in a sensible order. 495 496 Third, the class ensures that any pre-action or post-action hooks are 497 scheduled and executed appropriately. Hooks are configured by building a 498 dictionary mapping between hook action name and command. Pre-action hooks 499 are executed immediately before their associated action, and post-action 500 hooks are executed immediately after their associated action. 501 502 Finally, the class properly interleaves local and managed actions so that 503 the same action gets executed first locally and then on managed peers. 504 505 @sort: __init__, executeActions 506 """ 507
    508 - def __init__(self, actions, extensions, options, peers, managed, local):
    509 """ 510 Constructor for the C{_ActionSet} class. 511 512 This is kind of ugly, because the constructor has to set up a lot of data 513 before being able to do anything useful. The following data structures 514 are initialized based on the input: 515 516 - C{extensionNames}: List of extensions available in configuration 517 - C{preHookMap}: Mapping from action name to pre C{ActionHook} 518 - C{preHookMap}: Mapping from action name to post C{ActionHook} 519 - C{functionMap}: Mapping from action name to Python function 520 - C{indexMap}: Mapping from action name to execution index 521 - C{peerMap}: Mapping from action name to set of C{RemotePeer} 522 - C{actionMap}: Mapping from action name to C{_ActionItem} 523 524 Once these data structures are set up, the command line is validated to 525 make sure only valid actions have been requested, and in a sensible 526 combination. Then, all of the data is used to build C{self.actionSet}, 527 the set action items to be executed by C{executeActions()}. This list 528 might contain either C{_ActionItem} or C{_ManagedActionItem}. 529 530 @param actions: Names of actions specified on the command-line. 531 @param extensions: Extended action configuration (i.e. config.extensions) 532 @param options: Options configuration (i.e. config.options) 533 @param peers: Peers configuration (i.e. config.peers) 534 @param managed: Whether to include managed actions in the set 535 @param local: Whether to include local actions in the set 536 537 @raise ValueError: If one of the specified actions is invalid. 
538 """ 539 extensionNames = _ActionSet._deriveExtensionNames(extensions) 540 (preHookMap, postHookMap) = _ActionSet._buildHookMaps(options.hooks) 541 functionMap = _ActionSet._buildFunctionMap(extensions) 542 indexMap = _ActionSet._buildIndexMap(extensions) 543 peerMap = _ActionSet._buildPeerMap(options, peers) 544 actionMap = _ActionSet._buildActionMap(managed, local, extensionNames, functionMap, 545 indexMap, preHookMap, postHookMap, peerMap) 546 _ActionSet._validateActions(actions, extensionNames) 547 self.actionSet = _ActionSet._buildActionSet(actions, actionMap)
    548 549 @staticmethod
    550 - def _deriveExtensionNames(extensions):
    551 """ 552 Builds a list of extended actions that are available in configuration. 553 @param extensions: Extended action configuration (i.e. config.extensions) 554 @return: List of extended action names. 555 """ 556 extensionNames = [] 557 if extensions is not None and extensions.actions is not None: 558 for action in extensions.actions: 559 extensionNames.append(action.name) 560 return extensionNames
    561 562 @staticmethod
    563 - def _buildHookMaps(hooks):
    564 """ 565 Build two mappings from action name to configured C{ActionHook}. 566 @param hooks: List of pre- and post-action hooks (i.e. config.options.hooks) 567 @return: Tuple of (pre hook dictionary, post hook dictionary). 568 """ 569 preHookMap = {} 570 postHookMap = {} 571 if hooks is not None: 572 for hook in hooks: 573 if hook.before: 574 preHookMap[hook.action] = hook 575 elif hook.after: 576 postHookMap[hook.action] = hook 577 return (preHookMap, postHookMap)
    578 579 @staticmethod
    580 - def _buildFunctionMap(extensions):
    581 """ 582 Builds a mapping from named action to action function. 583 @param extensions: Extended action configuration (i.e. config.extensions) 584 @return: Dictionary mapping action to function. 585 """ 586 functionMap = {} 587 functionMap['rebuild'] = executeRebuild 588 functionMap['validate'] = executeValidate 589 functionMap['initialize'] = executeInitialize 590 functionMap['collect'] = executeCollect 591 functionMap['stage'] = executeStage 592 functionMap['store'] = executeStore 593 functionMap['purge'] = executePurge 594 if extensions is not None and extensions.actions is not None: 595 for action in extensions.actions: 596 functionMap[action.name] = getFunctionReference(action.module, action.function) 597 return functionMap
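_buildFunctionMap is a plain dispatch table: built-in action names map directly to callables, and extended actions are resolved by module/function name at runtime (via getFunctionReference). A minimal sketch of the same pattern with hypothetical stand-in functions — the real map binds executeCollect, executeStage, and the other action functions imported above:

```python
import importlib

# Hypothetical built-in actions, standing in for executeCollect etc.
def execute_collect(path): return "collected %s" % path
def execute_stage(path):   return "staged %s" % path

function_map = {"collect": execute_collect, "stage": execute_stage}

# An "extension" resolved dynamically by module and function name,
# the way getFunctionReference() resolves configured extended actions.
function_map["sysinfo"] = getattr(importlib.import_module("platform"), "system")

result = function_map["collect"]("/tmp/backup")
print(result)  # collected /tmp/backup
```

The payoff of the table is uniform invocation: once the map is built, the caller executes any action — built-in or extension — without caring where the callable came from.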
    598 599 @staticmethod
    600 - def _buildIndexMap(extensions):
    601 """ 602 Builds a mapping from action name to proper execution index. 603 604 If extensions configuration is C{None}, or there are no configured 605 extended actions, the ordering dictionary will only include the built-in 606 actions and their standard indices. 607 608 Otherwise, if the extensions order mode is C{None} or C{"index"}, actions 609 will scheduled by explicit index; and if the extensions order mode is 610 C{"dependency"}, actions will be scheduled using a dependency graph. 611 612 @param extensions: Extended action configuration (i.e. config.extensions) 613 614 @return: Dictionary mapping action name to integer execution index. 615 """ 616 indexMap = {} 617 if extensions is None or extensions.actions is None or extensions.actions == []: 618 logger.info("Action ordering will use 'index' order mode.") 619 indexMap['rebuild'] = REBUILD_INDEX 620 indexMap['validate'] = VALIDATE_INDEX 621 indexMap['initialize'] = INITIALIZE_INDEX 622 indexMap['collect'] = COLLECT_INDEX 623 indexMap['stage'] = STAGE_INDEX 624 indexMap['store'] = STORE_INDEX 625 indexMap['purge'] = PURGE_INDEX 626 logger.debug("Completed filling in action indices for built-in actions.") 627 logger.info("Action order will be: %s" % sortDict(indexMap)) 628 else: 629 if extensions.orderMode is None or extensions.orderMode == "index": 630 logger.info("Action ordering will use 'index' order mode.") 631 indexMap['rebuild'] = REBUILD_INDEX 632 indexMap['validate'] = VALIDATE_INDEX 633 indexMap['initialize'] = INITIALIZE_INDEX 634 indexMap['collect'] = COLLECT_INDEX 635 indexMap['stage'] = STAGE_INDEX 636 indexMap['store'] = STORE_INDEX 637 indexMap['purge'] = PURGE_INDEX 638 logger.debug("Completed filling in action indices for built-in actions.") 639 for action in extensions.actions: 640 indexMap[action.name] = action.index 641 logger.debug("Completed filling in action indices for extended actions.") 642 logger.info("Action order will be: %s" % sortDict(indexMap)) 643 else: 644 
logger.info("Action ordering will use 'dependency' order mode.") 645 graph = DirectedGraph("dependencies") 646 graph.createVertex("rebuild") 647 graph.createVertex("validate") 648 graph.createVertex("initialize") 649 graph.createVertex("collect") 650 graph.createVertex("stage") 651 graph.createVertex("store") 652 graph.createVertex("purge") 653 for action in extensions.actions: 654 graph.createVertex(action.name) 655 graph.createEdge("collect", "stage") # Collect must run before stage, store or purge 656 graph.createEdge("collect", "store") 657 graph.createEdge("collect", "purge") 658 graph.createEdge("stage", "store") # Stage must run before store or purge 659 graph.createEdge("stage", "purge") 660 graph.createEdge("store", "purge") # Store must run before purge 661 for action in extensions.actions: 662 if action.dependencies.beforeList is not None: 663 for vertex in action.dependencies.beforeList: 664 try: 665 graph.createEdge(action.name, vertex) # actions that this action must be run before 666 except ValueError: 667 logger.error("Dependency [%s] on extension [%s] is unknown." % (vertex, action.name)) 668 raise ValueError("Unable to determine proper action order due to invalid dependency.") 669 if action.dependencies.afterList is not None: 670 for vertex in action.dependencies.afterList: 671 try: 672 graph.createEdge(vertex, action.name) # actions that this action must be run after 673 except ValueError: 674 logger.error("Dependency [%s] on extension [%s] is unknown." 
% (vertex, action.name)) 675 raise ValueError("Unable to determine proper action order due to invalid dependency.") 676 try: 677 ordering = graph.topologicalSort() 678 indexMap = dict([(ordering[i], i+1) for i in range(0, len(ordering))]) 679 logger.info("Action order will be: %s" % ordering) 680 except ValueError: 681 logger.error("Unable to determine proper action order due to dependency recursion.") 682 logger.error("Extensions configuration is invalid (check for loops).") 683 raise ValueError("Unable to determine proper action order due to dependency recursion.") 684 return indexMap
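DirectedGraph is Cedar Backup's own utility class, but the "dependency" order mode above boils down to a topological sort of the action graph, with a cycle raising ValueError. A self-contained sketch using Kahn's algorithm over the built-in edges (collect before stage/store/purge, stage before store/purge, store before purge) — a stand-in for DirectedGraph.topologicalSort(), not its actual implementation:

```python
from collections import defaultdict, deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: edges are (before, after) pairs; raises on cycles."""
    succ, indeg = defaultdict(list), {v: 0 for v in vertices}
    for before, after in edges:
        succ[before].append(after)
        indeg[after] += 1
    queue = deque(sorted(v for v in vertices if indeg[v] == 0))
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) != len(vertices):
        raise ValueError("dependency loop detected")
    return order

# The built-in edges wired up in _buildIndexMap's dependency branch:
vertices = ["collect", "stage", "store", "purge"]
edges = [("collect", "stage"), ("collect", "store"), ("collect", "purge"),
         ("stage", "store"), ("stage", "purge"), ("store", "purge")]
order = topological_sort(vertices, edges)
index_map = {name: i + 1 for i, name in enumerate(order)}
print(order)  # ['collect', 'stage', 'store', 'purge']
```

As in the source, the resulting ordering is then flattened back into an index map so the rest of the scheduling machinery works identically in both order modes.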
    685 686 @staticmethod
    687 - def _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap):
    688 """ 689 Builds a mapping from action name to list of action items. 690 691 We build either C{_ActionItem} or C{_ManagedActionItem} objects here. 692 693 In most cases, the mapping from action name to C{_ActionItem} is 1:1. 694 The exception is the "all" action, which is a special case. However, a 695 list is returned in all cases, just for consistency later. Each 696 C{_ActionItem} will be created with a proper function reference and index 697 value for execution ordering. 698 699 The mapping from action name to C{_ManagedActionItem} is always 1:1. 700 Each managed action item contains a list of peers which the action should 701 be executed. 702 703 @param managed: Whether to include managed actions in the set 704 @param local: Whether to include local actions in the set 705 @param extensionNames: List of valid extended action names 706 @param functionMap: Dictionary mapping action name to Python function 707 @param indexMap: Dictionary mapping action name to integer execution index 708 @param preHookMap: Dictionary mapping action name to pre hooks (if any) for the action 709 @param postHookMap: Dictionary mapping action name to post hooks (if any) for the action 710 @param peerMap: Dictionary mapping action name to list of remote peers on which to execute the action 711 712 @return: Dictionary mapping action name to list of C{_ActionItem} objects. 713 """ 714 actionMap = {} 715 for name in extensionNames + VALID_ACTIONS: 716 if name != 'all': # do this one later 717 function = functionMap[name] 718 index = indexMap[name] 719 actionMap[name] = [] 720 if local: 721 (preHook, postHook) = _ActionSet._deriveHooks(name, preHookMap, postHookMap) 722 actionMap[name].append(_ActionItem(index, name, preHook, postHook, function)) 723 if managed: 724 if name in peerMap: 725 actionMap[name].append(_ManagedActionItem(index, name, peerMap[name])) 726 actionMap['all'] = actionMap['collect'] + actionMap['stage'] + actionMap['store'] + actionMap['purge'] 727 return actionMap
    @staticmethod
    def _buildPeerMap(options, peers):
        """
        Build a mapping from action name to list of remote peers.

        There will be one entry in the mapping for each managed action.  If there
        are no managed peers, the mapping will be empty.  Only managed actions
        will be listed in the mapping.

        @param options: Option configuration (i.e. config.options)
        @param peers: Peers configuration (i.e. config.peers)

        @return: Dictionary mapping action name to list of C{RemotePeer} objects.
        """
        peerMap = {}
        if peers is not None:
            if peers.remotePeers is not None:
                for peer in peers.remotePeers:
                    if peer.managed:
                        remoteUser = _ActionSet._getRemoteUser(options, peer)
                        rshCommand = _ActionSet._getRshCommand(options, peer)
                        cbackCommand = _ActionSet._getCbackCommand(options, peer)
                        managedActions = _ActionSet._getManagedActions(options, peer)
                        remotePeer = RemotePeer(peer.name, None, options.workingDir, remoteUser, None,
                                                options.backupUser, rshCommand, cbackCommand)
                        if managedActions is not None:
                            for managedAction in managedActions:
                                if managedAction in peerMap:
                                    if remotePeer not in peerMap[managedAction]:
                                        peerMap[managedAction].append(remotePeer)
                                else:
                                    peerMap[managedAction] = [remotePeer, ]
        return peerMap
    @staticmethod
    def _deriveHooks(action, preHookDict, postHookDict):
        """
        Derive pre- and post-action hooks, if any, associated with named action.
        @param action: Name of action to look up
        @param preHookDict: Dictionary mapping action name to pre-action hook
        @param postHookDict: Dictionary mapping action name to post-action hook
        @return: Tuple (preHook, postHook) per mapping, with None values if there is no hook.
        """
        preHook = None
        postHook = None
        if action in preHookDict:
            preHook = preHookDict[action]
        if action in postHookDict:
            postHook = postHookDict[action]
        return (preHook, postHook)
    @staticmethod
    def _validateActions(actions, extensionNames):
        """
        Validate that the set of specified actions is sensible.

        Any specified action must either be a built-in action or must be among
        the extended actions defined in configuration.  The actions from within
        L{NONCOMBINE_ACTIONS} may not be combined with other actions.

        @param actions: Names of actions specified on the command-line.
        @param extensionNames: Names of extensions specified in configuration.

        @raise ValueError: If one or more configured actions are not valid.
        """
        if actions is None or actions == []:
            raise ValueError("No actions specified.")
        for action in actions:
            if action not in VALID_ACTIONS and action not in extensionNames:
                raise ValueError("Action [%s] is not a valid action or extended action." % action)
        for action in NONCOMBINE_ACTIONS:
            if action in actions and actions != [action, ]:
                raise ValueError("Action [%s] may not be combined with other actions." % action)
    @staticmethod
    def _buildActionSet(actions, actionMap):
        """
        Build set of actions to be executed.

        The set of actions is built in the proper order, so C{executeActions} can
        spin through the set without thinking about it.  Since we've already validated
        that the set of actions is sensible, we don't take any precautions here to
        make sure things are combined properly.  If the action is listed, it will
        be "scheduled" for execution.

        @param actions: Names of actions specified on the command-line.
        @param actionMap: Dictionary mapping action name to list of C{_ActionItem} objects.

        @return: Set of action items in proper order.
        """
        actionSet = []
        for action in actions:
            actionSet.extend(actionMap[action])
        actionSet.sort()  # sort the actions in order by index
        return actionSet

    def executeActions(self, configPath, options, config):
        """
        Executes all actions and extended actions, in the proper order.

        Each action (whether built-in or extension) is executed in an identical
        manner.  The built-in actions will use only the options and config
        values.  We also pass in the config path so that extension modules can
        re-parse configuration if they want to, to add in extra information.

        @param configPath: Path to configuration file on disk.
        @param options: Command-line options to be passed to action functions.
        @param config: Parsed configuration to be passed to action functions.

        @raise Exception: If there is a problem executing the actions.
        """
        logger.debug("Executing local actions.")
        for actionItem in self.actionSet:
            actionItem.executeAction(configPath, options, config)
    @staticmethod
    def _getRemoteUser(options, remotePeer):
        """
        Gets the remote user associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: Name of remote user associated with remote peer.
        """
        if remotePeer.remoteUser is None:
            return options.backupUser
        return remotePeer.remoteUser
    @staticmethod
    def _getRshCommand(options, remotePeer):
        """
        Gets the RSH command associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: RSH command associated with remote peer.
        """
        if remotePeer.rshCommand is None:
            return options.rshCommand
        return remotePeer.rshCommand
    @staticmethod
    def _getCbackCommand(options, remotePeer):
        """
        Gets the cback command associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: cback command associated with remote peer.
        """
        if remotePeer.cbackCommand is None:
            return options.cbackCommand
        return remotePeer.cbackCommand
    @staticmethod
    def _getManagedActions(options, remotePeer):
        """
        Gets the managed actions list associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: Set of managed actions associated with remote peer.
        """
        if remotePeer.managedActions is None:
            return options.managedActions
        return remotePeer.managedActions

#######################################################################
# Utility functions
#######################################################################

####################
# _usage() function
####################

def _usage(fd=sys.stderr):
    """
    Prints usage information for the cback script.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write(" Usage: cback [switches] action(s)\n")
    fd.write("\n")
    fd.write(" The following switches are accepted:\n")
    fd.write("\n")
    fd.write("   -h, --help         Display this usage/help listing\n")
    fd.write("   -V, --version      Display version information\n")
    fd.write("   -b, --verbose      Print verbose output as well as logging to disk\n")
    fd.write("   -q, --quiet        Run quietly (display no output to the screen)\n")
    fd.write("   -c, --config       Path to config file (default: %s)\n" % DEFAULT_CONFIG)
    fd.write("   -f, --full         Perform a full backup, regardless of configuration\n")
    fd.write("   -M, --managed      Include managed clients when executing actions\n")
    fd.write("   -N, --managed-only Include ONLY managed clients when executing actions\n")
    fd.write("   -l, --logfile      Path to logfile (default: %s)\n" % DEFAULT_LOGFILE)
    fd.write("   -o, --owner        Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]))
    fd.write("   -m, --mode         Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE)
    fd.write("   -O, --output       Record some sub-command (i.e. cdrecord) output to the log\n")
    fd.write("   -d, --debug        Write debugging information to the log (implies --output)\n")
    fd.write("   -s, --stack        Dump a Python stack trace instead of swallowing exceptions\n")  # exactly 80 characters in width!
    fd.write("   -D, --diagnostics  Print runtime diagnostics to the screen and exit\n")
    fd.write("\n")
    fd.write(" The following actions may be specified:\n")
    fd.write("\n")
    fd.write("   all         Take all normal actions (collect, stage, store, purge)\n")
    fd.write("   collect     Take the collect action\n")
    fd.write("   stage       Take the stage action\n")
    fd.write("   store       Take the store action\n")
    fd.write("   purge       Take the purge action\n")
    fd.write("   rebuild     Rebuild \"this week's\" disc if possible\n")
    fd.write("   validate    Validate configuration only\n")
    fd.write("   initialize  Initialize media for use with Cedar Backup\n")
    fd.write("\n")
    fd.write(" You may also specify extended actions that have been defined in\n")
    fd.write(" configuration.\n")
    fd.write("\n")
    fd.write(" You must specify at least one action to take.  More than one of\n")
    fd.write(" the \"collect\", \"stage\", \"store\" or \"purge\" actions and/or\n")
    fd.write(" extended actions may be specified in any arbitrary order; they\n")
    fd.write(" will be executed in a sensible order.  The \"all\", \"rebuild\",\n")
    fd.write(" \"validate\", and \"initialize\" actions may not be combined with\n")
    fd.write(" other actions.\n")
    fd.write("\n")

######################
# _version() function
######################

def _version(fd=sys.stdout):
    """
    Prints version information for the cback script.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write(" Cedar Backup version %s, released %s.\n" % (VERSION, DATE))
    fd.write("\n")
    fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL))
    fd.write(" See CREDITS for a list of included code and other contributors.\n")
    fd.write(" This is free software; there is NO warranty.  See the\n")
    fd.write(" GNU General Public License version 2 for copying conditions.\n")
    fd.write("\n")
    fd.write(" Use the --help option for usage information.\n")
    fd.write("\n")

##########################
# _diagnostics() function
##########################

def _diagnostics(fd=sys.stdout):
    """
    Prints runtime diagnostics information.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write("Diagnostics:\n")
    fd.write("\n")
    Diagnostics().printDiagnostics(fd=fd, prefix="   ")
    fd.write("\n")

##########################
# setupLogging() function
##########################

def setupLogging(options):
    """
    Set up logging based on command-line options.

    There are two kinds of logging: flow logging and output logging.  Output
    logging contains information about system commands executed by Cedar Backup,
    for instance the calls to C{mkisofs} or C{mount}, etc.  Flow logging
    contains error and informational messages used to understand program flow.
    Flow log messages and output log messages are written to two different
    logger targets (C{CedarBackup2.log} and C{CedarBackup2.output}).  Flow log
    messages are written at the ERROR, INFO and DEBUG log levels, while output
    log messages are generally only written at the INFO log level.

    By default, output logging is disabled.  When the C{options.output} or
    C{options.debug} flags are set, output logging will be written to the
    configured logfile.  Output logging is never written to the screen.

    By default, flow logging is enabled at the ERROR level to the screen and at
    the INFO level to the configured logfile.  If the C{options.quiet} flag is
    set, flow logging is enabled at the INFO level to the configured logfile
    only (i.e. no output will be sent to the screen).  If the C{options.verbose}
    flag is set, flow logging is enabled at the INFO level to both the screen
    and the configured logfile.  If the C{options.debug} flag is set, flow
    logging is enabled at the DEBUG level to both the screen and the configured
    logfile.

    @param options: Command-line options.
    @type options: L{Options} object

    @return: Path to logfile on disk.
    """
    logfile = _setupLogfile(options)
    _setupFlowLogging(logfile, options)
    _setupOutputLogging(logfile, options)
    return logfile

def _setupLogfile(options):
    """
    Sets up and creates logfile as needed.

    If the logfile already exists on disk, it will be left as-is, under the
    assumption that it was created with appropriate ownership and permissions.
    If the logfile does not exist on disk, it will be created as an empty file.
    Ownership and permissions will remain at their defaults unless user/group
    and/or mode are set in the options.  We ignore errors setting the indicated
    user and group.

    @note: This function is vulnerable to a race condition.  If the log file
    does not exist when the function is run, it will attempt to create the file
    as safely as possible (using C{O_CREAT}).  If two processes attempt to
    create the file at the same time, then one of them will fail.  In practice,
    this shouldn't really be a problem, but it might happen occasionally if two
    instances of cback run concurrently or if cback collides with logrotate or
    something.

    @param options: Command-line options.

    @return: Path to logfile on disk.
    """
    if options.logfile is None:
        logfile = DEFAULT_LOGFILE
    else:
        logfile = options.logfile
    if not os.path.exists(logfile):
        if options.mode is None:
            os.fdopen(os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND, DEFAULT_MODE), "a+").write("")
        else:
            os.fdopen(os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND, options.mode), "a+").write("")
    try:
        if options.owner is None or len(options.owner) < 2:
            (uid, gid) = getUidGid(DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])
        else:
            (uid, gid) = getUidGid(options.owner[0], options.owner[1])
        os.chown(logfile, uid, gid)
    except:
        pass  # ignore errors setting ownership, as documented above
    return logfile

def _setupFlowLogging(logfile, options):
    """
    Sets up flow logging.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    flowLogger = logging.getLogger("CedarBackup2.log")
    flowLogger.setLevel(logging.DEBUG)  # let the logger see all messages
    _setupDiskFlowLogging(flowLogger, logfile, options)
    _setupScreenFlowLogging(flowLogger, options)

def _setupOutputLogging(logfile, options):
    """
    Sets up command output logging.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    outputLogger = logging.getLogger("CedarBackup2.output")
    outputLogger.setLevel(logging.DEBUG)  # let the logger see all messages
    _setupDiskOutputLogging(outputLogger, logfile, options)

def _setupDiskFlowLogging(flowLogger, logfile, options):
    """
    Sets up on-disk flow logging.
    @param flowLogger: Python flow logger object.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    formatter = logging.Formatter(fmt=DISK_LOG_FORMAT, datefmt=DATE_FORMAT)
    handler = logging.FileHandler(logfile, mode="a")
    handler.setFormatter(formatter)
    if options.debug:
        handler.setLevel(logging.DEBUG)
    else:
        handler.setLevel(logging.INFO)
    flowLogger.addHandler(handler)

def _setupScreenFlowLogging(flowLogger, options):
    """
    Sets up on-screen flow logging.
    @param flowLogger: Python flow logger object.
    @param options: Command-line options.
    """
    formatter = logging.Formatter(fmt=SCREEN_LOG_FORMAT)
    handler = logging.StreamHandler(SCREEN_LOG_STREAM)
    handler.setFormatter(formatter)
    if options.quiet:
        handler.setLevel(logging.CRITICAL)  # effectively turn it off
    elif options.verbose:
        if options.debug:
            handler.setLevel(logging.DEBUG)
        else:
            handler.setLevel(logging.INFO)
    else:
        handler.setLevel(logging.ERROR)
    flowLogger.addHandler(handler)

def _setupDiskOutputLogging(outputLogger, logfile, options):
    """
    Sets up on-disk command output logging.
    @param outputLogger: Python command output logger object.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    formatter = logging.Formatter(fmt=DISK_OUTPUT_FORMAT, datefmt=DATE_FORMAT)
    handler = logging.FileHandler(logfile, mode="a")
    handler.setFormatter(formatter)
    if options.debug or options.output:
        handler.setLevel(logging.DEBUG)
    else:
        handler.setLevel(logging.CRITICAL)  # effectively turn it off
    outputLogger.addHandler(handler)

###############################
# setupPathResolver() function
###############################

def setupPathResolver(config):
    """
    Set up the path resolver singleton based on configuration.

    Cedar Backup's path resolver is implemented in terms of a singleton, the
    L{PathResolverSingleton} class.  This function takes options configuration,
    converts it into the dictionary form needed by the singleton, and then
    initializes the singleton.  After that, any function that needs to resolve
    the path of a command can use the singleton.

    @param config: Configuration
    @type config: L{Config} object
    """
    mapping = {}
    if config.options.overrides is not None:
        for override in config.options.overrides:
            mapping[override.command] = override.absolutePath
    singleton = PathResolverSingleton()
    singleton.fill(mapping)

##########################
# Options class definition
##########################

class Options(object):

    ######################
    # Class documentation
    ######################

    """
    Class representing command-line options for the cback script.

    The C{Options} class is a Python object representation of the command-line
    options of the cback script.

    The object representation is two-way: a command line string or a list of
    command line arguments can be used to create an C{Options} object, and then
    changes to the object can be propagated back to a list of command-line
    arguments or to a command-line string.  An C{Options} object can even be
    created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the C{Options} class.  The first
    is field-level validation.  Field-level validation comes into play when a
    given field in an object is assigned to or updated.  We use Python's
    C{property} functionality to enforce specific validations on field values,
    and in some places we even use customized list classes to enforce
    validations on list members.  You should expect to catch a C{ValueError}
    exception when making assignments to fields if you are programmatically
    filling an object.

    The second level of validation is post-completion validation.  Certain
    validations don't make sense until an object representation of options is
    fully "complete".  We don't want these validations to apply all of the time,
    because it would make building up a valid object from scratch a real pain.
    For instance, we might have to do things in the right order to keep from
    throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the
    L{Options.validate} method.  This method can be called at any time by a
    client, and will always be called immediately after creating an C{Options}
    object from a command line and before exporting an C{Options} object back to
    a command line.  This way, we get acceptable ease-of-use but we also don't
    accept or emit invalid command lines.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__
    """

    ##############
    # Constructor
    ##############
    def __init__(self, argumentList=None, argumentString=None, validate=True):
        """
        Initializes an options object.

        If you initialize the object without passing either C{argumentList} or
        C{argumentString}, the object will be empty and will be invalid until it
        is filled in properly.

        No reference to the original arguments is saved off by this class.  Once
        the data has been parsed (successfully or not) this original information
        is discarded.

        The argument list is assumed to be a list of arguments, not including the
        name of the command, something like C{sys.argv[1:]}.  If you pass
        C{sys.argv} instead, things are not going to work.

        The argument string will be parsed into an argument list by the
        L{util.splitCommandLine} function (see the documentation for that
        function for some important notes about its limitations).  There is an
        assumption that the resulting list will be equivalent to C{sys.argv[1:]},
        just like C{argumentList}.

        Unless the C{validate} argument is C{False}, the L{Options.validate}
        method will be called (with its default arguments) after successfully
        parsing any passed-in command line.  This validation ensures that
        appropriate actions, etc. have been specified.  Keep in mind that even if
        C{validate} is C{False}, it might not be possible to parse the passed-in
        command line, so an exception might still be raised.

        @note: The command line format is specified by the L{_usage} function.
        Call L{_usage} to see a usage statement for the cback script.

        @note: It is strongly suggested that the C{validate} option always be set
        to C{True} (the default) unless there is a specific need to read in
        invalid command line arguments.

        @param argumentList: Command line for a program.
        @type argumentList: List of arguments, i.e. C{sys.argv[1:]}

        @param argumentString: Command line for a program.
        @type argumentString: String, i.e. "cback --verbose stage store"

        @param validate: Validate the command line after parsing it.
        @type validate: Boolean true/false.

        @raise getopt.GetoptError: If the command-line arguments could not be parsed.
        @raise ValueError: If the command-line arguments are invalid.
        """
        self._help = False
        self._version = False
        self._verbose = False
        self._quiet = False
        self._config = None
        self._full = False
        self._managed = False
        self._managedOnly = False
        self._logfile = None
        self._owner = None
        self._mode = None
        self._output = False
        self._debug = False
        self._stacktrace = False
        self._diagnostics = False
        self._actions = None
        self.actions = []  # initialize to an empty list; remainder are OK
        if argumentList is not None and argumentString is not None:
            raise ValueError("Use either argumentList or argumentString, but not both.")
        if argumentString is not None:
            argumentList = splitCommandLine(argumentString)
        if argumentList is not None:
            self._parseArgumentList(argumentList)
            if validate:
                self.validate()

    #########################
    # String representations
    #########################

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return self.buildArgumentString(validate=False)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    #############################
    # Standard comparison method
    #############################

    def __cmp__(self, other):
        """
        Definition of equals operator for this class.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        # Field-by-field cascade, preserving the original comparison order:
        # the first unequal field decides the result.
        for field in ["help", "version", "verbose", "quiet", "config", "full",
                      "managed", "managedOnly", "logfile", "owner", "mode",
                      "output", "debug", "stacktrace", "diagnostics", "actions"]:
            (mine, theirs) = (getattr(self, field), getattr(other, field))
            if mine != theirs:
                if mine < theirs:
                    return -1
                else:
                    return 1
        return 0
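The cascade above is equivalent to comparing fields in a fixed order, where the first unequal field decides the result.  A two-field sketch (plain dicts stand in for option objects):

```python
# Two-field sketch of the cascading comparison implemented by __cmp__:
# earlier fields dominate, and each unequal field decides -1 or 1 at once.
def cmp_fields(a, b, fields):
    for field in fields:
        if a[field] != b[field]:
            return -1 if a[field] < b[field] else 1
    return 0

x = {"help": False, "verbose": True}
y = {"help": False, "verbose": False}
```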

    #############
    # Properties
    #############

    def _setHelp(self, value):
        """
        Property target used to set the help flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        self._help = bool(value)

    def _getHelp(self):
        """
        Property target used to get the help flag.
        """
        return self._help

    def _setVersion(self, value):
        """
        Property target used to set the version flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        self._version = bool(value)

    def _getVersion(self):
        """
        Property target used to get the version flag.
        """
        return self._version

    def _setVerbose(self, value):
        """
        Property target used to set the verbose flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        self._verbose = bool(value)

    def _getVerbose(self):
        """
        Property target used to get the verbose flag.
        """
        return self._verbose

    def _setQuiet(self, value):
        """
        Property target used to set the quiet flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        self._quiet = bool(value)

    def _getQuiet(self):
        """
        Property target used to get the quiet flag.
        """
        return self._quiet

    def _setConfig(self, value):
        """
        Property target used to set the config parameter.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The config parameter must be a non-empty string.")
        self._config = value

    def _getConfig(self):
        """
        Property target used to get the config parameter.
        """
        return self._config

    def _setFull(self, value):
        """
        Property target used to set the full flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        self._full = bool(value)

    def _getFull(self):
        """
        Property target used to get the full flag.
        """
        return self._full

    def _setManaged(self, value):
        """
        Property target used to set the managed flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        self._managed = bool(value)

    def _getManaged(self):
        """
        Property target used to get the managed flag.
        """
        return self._managed

    def _setManagedOnly(self, value):
        """
        Property target used to set the managedOnly flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        self._managedOnly = bool(value)

    def _getManagedOnly(self):
        """
        Property target used to get the managedOnly flag.
        """
        return self._managedOnly

    def _setLogfile(self, value):
        """
        Property target used to set the logfile parameter.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The logfile parameter must be a non-empty string.")
        self._logfile = encodePath(value)

    def _getLogfile(self):
        """
        Property target used to get the logfile parameter.
        """
        return self._logfile

    def _setOwner(self, value):
        """
        Property target used to set the owner parameter.
        If not C{None}, the owner must be a C{(user,group)} tuple or list.
        Strings (and inherited children of strings) are explicitly disallowed.
        The value will be normalized to a tuple.
        @raise ValueError: If the value is not valid.
        """
        if value is None:
            self._owner = None
        else:
            if isinstance(value, str):
                raise ValueError("Must specify user and group tuple for owner parameter.")
            if len(value) != 2:
                raise ValueError("Must specify user and group tuple for owner parameter.")
            if len(value[0]) < 1 or len(value[1]) < 1:
                raise ValueError("User and group tuple values must be non-empty strings.")
            self._owner = (value[0], value[1])

    def _getOwner(self):
        """
        Property target used to get the owner parameter.
        The parameter is a tuple of C{(user, group)}.
        """
        return self._owner

    def _setMode(self, value):
        """
        Property target used to set the mode parameter.
        @raise ValueError: If the value is not a valid octal integer >= 0.
        """
        if value is None:
            self._mode = None
        else:
            try:
                if isinstance(value, str):
                    value = int(value, 8)
                else:
                    value = int(value)
            except TypeError:
                raise ValueError("Mode must be an octal integer >= 0, i.e. 644.")
            if value < 0:
                raise ValueError("Mode must be an octal integer >= 0, i.e. 644.")
            self._mode = value

    def _getMode(self):
        """
        Property target used to get the mode parameter.
        """
        return self._mode
    1610
    1611 - def _setOutput(self, value):
    1612 """ 1613 Property target used to set the output flag. 1614 No validations, but we normalize the value to C{True} or C{False}. 1615 """ 1616 if value: 1617 self._output = True 1618 else: 1619 self._output = False
    1620
    1621 - def _getOutput(self):
    1622 """ 1623 Property target used to get the output flag. 1624 """ 1625 return self._output
    1626
    1627 - def _setDebug(self, value):
    1628 """ 1629 Property target used to set the debug flag. 1630 No validations, but we normalize the value to C{True} or C{False}. 1631 """ 1632 if value: 1633 self._debug = True 1634 else: 1635 self._debug = False
    1636
    1637 - def _getDebug(self):
    1638 """ 1639 Property target used to get the debug flag. 1640 """ 1641 return self._debug
    1642
    1643 - def _setStacktrace(self, value):
    1644 """ 1645 Property target used to set the stacktrace flag. 1646 No validations, but we normalize the value to C{True} or C{False}. 1647 """ 1648 if value: 1649 self._stacktrace = True 1650 else: 1651 self._stacktrace = False
    1652
    1653 - def _getStacktrace(self):
    1654 """ 1655 Property target used to get the stacktrace flag. 1656 """ 1657 return self._stacktrace
    1658
    1659 - def _setDiagnostics(self, value):
    1660 """ 1661 Property target used to set the diagnostics flag. 1662 No validations, but we normalize the value to C{True} or C{False}. 1663 """ 1664 if value: 1665 self._diagnostics = True 1666 else: 1667 self._diagnostics = False
    1668
    1669 - def _getDiagnostics(self):
    1670 """ 1671 Property target used to get the diagnostics flag. 1672 """ 1673 return self._diagnostics
    1674
    1675 - def _setActions(self, value):
    1676 """ 1677 Property target used to set the actions list. 1678 We don't restrict the contents of actions. They're validated somewhere else. 1679 @raise ValueError: If the value is not valid. 1680 """ 1681 if value is None: 1682 self._actions = None 1683 else: 1684 try: 1685 saved = self._actions 1686 self._actions = [] 1687 self._actions.extend(value) 1688 except Exception, e: 1689 self._actions = saved 1690 raise e
    1691
    1692 - def _getActions(self):
    1693 """ 1694 Property target used to get the actions list. 1695 """ 1696 return self._actions
    1697 1698 help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") 1699 version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") 1700 verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") 1701 quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") 1702 config = property(_getConfig, _setConfig, None, "Command-line configuration file (C{-c,--config}) parameter.") 1703 full = property(_getFull, _setFull, None, "Command-line full-backup (C{-f,--full}) flag.") 1704 managed = property(_getManaged, _setManaged, None, "Command-line managed (C{-M,--managed}) flag.") 1705 managedOnly = property(_getManagedOnly, _setManagedOnly, None, "Command-line managed-only (C{-N,--managed-only}) flag.") 1706 logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") 1707 owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") 1708 mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") 1709 output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") 1710 debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") 1711 stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") 1712 diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") 1713 actions = property(_getActions, _setActions, None, "Command-line actions list.") 1714 1715 1716 ################## 1717 # Utility methods 1718 ################## 1719
   def validate(self):
      """
      Validates command-line options represented by the object.

      Unless C{--help}, C{--version} or C{--diagnostics} are supplied, at least
      one action must be specified.  Other validations (as for allowed values
      for particular options) will be taken care of at assignment time by the
      properties functionality.

      @note: The command line format is specified by the L{_usage} function.
      Call L{_usage} to see a usage statement for the cback script.

      @raise ValueError: If one of the validations fails.
      """
      if not self.help and not self.version and not self.diagnostics:
         if self.actions is None or len(self.actions) == 0:
            raise ValueError("At least one action must be specified.")
      if self.managed and self.managedOnly:
         raise ValueError("The --managed and --managed-only options may not be combined.")

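The validation rules above are simple enough to restate in isolation. As a sketch, this hypothetical check_options() helper (not part of Cedar Backup; the name and keyword parameters are illustrative) mirrors the two checks: informational flags bypass the at-least-one-action requirement, and the two managed flags are mutually exclusive:

```python
def check_options(help_flag=False, version=False, diagnostics=False,
                  managed=False, managed_only=False, actions=None):
    # Mirrors Options.validate(): informational flags make actions optional.
    if not (help_flag or version or diagnostics):
        if not actions:
            raise ValueError("At least one action must be specified.")
    # --managed and --managed-only may not be combined.
    if managed and managed_only:
        raise ValueError("The --managed and --managed-only options may not be combined.")

check_options(version=True)         # an informational flag alone is valid
check_options(actions=["collect"])  # an action alone is valid
```

Anything else (no action, or both managed flags) raises ValueError, just as validate() does.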
   def buildArgumentList(self, validate=True):
      """
      Extracts options into a list of command line arguments.

      The original order of the various arguments (if, indeed, the object was
      initialized with a command-line) is not preserved in this generated
      argument list.  Besides that, the argument list is normalized to use the
      long option names (i.e. --version rather than -V).  The resulting list
      will be suitable for passing back to the constructor in the
      C{argumentList} parameter.  Unlike L{buildArgumentString}, string
      arguments are not quoted here, because there is no need for it.

      Unless the C{validate} parameter is C{False}, the L{Options.validate}
      method will be called (with its default arguments) against the options
      before extracting the command line.  If the options are not valid, then
      an argument list will not be extracted.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to extract an
      invalid command line.

      @param validate: Validate the options before extracting the command line.
      @type validate: Boolean true/false.

      @return: List representation of command-line arguments.
      @raise ValueError: If options within the object are invalid.
      """
      if validate:
         self.validate()
      argumentList = []
      if self._help:
         argumentList.append("--help")
      if self.version:
         argumentList.append("--version")
      if self.verbose:
         argumentList.append("--verbose")
      if self.quiet:
         argumentList.append("--quiet")
      if self.config is not None:
         argumentList.append("--config")
         argumentList.append(self.config)
      if self.full:
         argumentList.append("--full")
      if self.managed:
         argumentList.append("--managed")
      if self.managedOnly:
         argumentList.append("--managed-only")
      if self.logfile is not None:
         argumentList.append("--logfile")
         argumentList.append(self.logfile)
      if self.owner is not None:
         argumentList.append("--owner")
         argumentList.append("%s:%s" % (self.owner[0], self.owner[1]))
      if self.mode is not None:
         argumentList.append("--mode")
         argumentList.append("%o" % self.mode)
      if self.output:
         argumentList.append("--output")
      if self.debug:
         argumentList.append("--debug")
      if self.stacktrace:
         argumentList.append("--stack")
      if self.diagnostics:
         argumentList.append("--diagnostics")
      if self.actions is not None:
         for action in self.actions:
            argumentList.append(action)
      return argumentList

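The normalization described above (long names only, parameter values as separate list entries, actions trailing) can be sketched without the Options class. This hypothetical build_argument_list() works over plain flags and a parameter dict; the names are illustrative, not the real API:

```python
def build_argument_list(flags=(), parameters=None, actions=()):
    """Sketch of buildArgumentList's normalization: every option comes out
    under its long name, each parameter becomes two entries (switch, then
    value), and actions are passed through at the end, unquoted."""
    arguments = []
    for flag in flags:                     # e.g. "verbose", "full"
        arguments.append("--%s" % flag)
    for switch, value in sorted((parameters or {}).items()):
        arguments.append("--%s" % switch)
        arguments.append(value)
    arguments.extend(actions)              # actions are plain words, no dashes
    return arguments

args = build_argument_list(flags=["verbose"],
                           parameters={"logfile": "/tmp/cback.log"},
                           actions=["collect"])
# args == ["--verbose", "--logfile", "/tmp/cback.log", "collect"]
```

The real method differs in that it reads the option values from the object's properties, but the output shape is the same.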
   def buildArgumentString(self, validate=True):
      """
      Extracts options into a string of command-line arguments.

      The original order of the various arguments (if, indeed, the object was
      initialized with a command-line) is not preserved in this generated
      argument string.  Besides that, the argument string is normalized to use
      the long option names (i.e. --version rather than -V) and to quote all
      string arguments with double quotes (C{"}).  The resulting string will be
      suitable for passing back to the constructor in the C{argumentString}
      parameter.

      Unless the C{validate} parameter is C{False}, the L{Options.validate}
      method will be called (with its default arguments) against the options
      before extracting the command line.  If the options are not valid, then
      an argument string will not be extracted.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to extract an
      invalid command line.

      @param validate: Validate the options before extracting the command line.
      @type validate: Boolean true/false.

      @return: String representation of command-line arguments.
      @raise ValueError: If options within the object are invalid.
      """
      if validate:
         self.validate()
      argumentString = ""
      if self._help:
         argumentString += "--help "
      if self.version:
         argumentString += "--version "
      if self.verbose:
         argumentString += "--verbose "
      if self.quiet:
         argumentString += "--quiet "
      if self.config is not None:
         argumentString += "--config \"%s\" " % self.config
      if self.full:
         argumentString += "--full "
      if self.managed:
         argumentString += "--managed "
      if self.managedOnly:
         argumentString += "--managed-only "
      if self.logfile is not None:
         argumentString += "--logfile \"%s\" " % self.logfile
      if self.owner is not None:
         argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1])
      if self.mode is not None:
         argumentString += "--mode %o " % self.mode
      if self.output:
         argumentString += "--output "
      if self.debug:
         argumentString += "--debug "
      if self.stacktrace:
         argumentString += "--stack "
      if self.diagnostics:
         argumentString += "--diagnostics "
      if self.actions is not None:
         for action in self.actions:
            argumentString += "\"%s\" " % action
      return argumentString

   def _parseArgumentList(self, argumentList):
      """
      Internal method to parse a list of command-line arguments.

      Most of the validation we do here has to do with whether the arguments
      can be parsed and whether any values which exist are valid.  We don't do
      any validation as to whether required elements exist or whether elements
      exist in the proper combination (instead, that's the job of the
      L{validate} method).

      For any of the options which supply parameters, if the option is
      duplicated with long and short switches (i.e. C{-l} and C{--logfile})
      then the long switch is used.  If the same option is duplicated with the
      same switch (long or short), then the last entry on the command line is
      used.

      @param argumentList: List of arguments to a command.
      @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]}

      @raise ValueError: If the argument list cannot be successfully parsed.
      """
      switches = { }
      opts, self.actions = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES)
      for o, a in opts:  # push the switches into a hash
         switches[o] = a
      if switches.has_key("-h") or switches.has_key("--help"):
         self.help = True
      if switches.has_key("-V") or switches.has_key("--version"):
         self.version = True
      if switches.has_key("-b") or switches.has_key("--verbose"):
         self.verbose = True
      if switches.has_key("-q") or switches.has_key("--quiet"):
         self.quiet = True
      if switches.has_key("-c"):
         self.config = switches["-c"]
      if switches.has_key("--config"):
         self.config = switches["--config"]
      if switches.has_key("-f") or switches.has_key("--full"):
         self.full = True
      if switches.has_key("-M") or switches.has_key("--managed"):
         self.managed = True
      if switches.has_key("-N") or switches.has_key("--managed-only"):
         self.managedOnly = True
      if switches.has_key("-l"):
         self.logfile = switches["-l"]
      if switches.has_key("--logfile"):
         self.logfile = switches["--logfile"]
      if switches.has_key("-o"):
         self.owner = switches["-o"].split(":", 1)
      if switches.has_key("--owner"):
         self.owner = switches["--owner"].split(":", 1)
      if switches.has_key("-m"):
         self.mode = switches["-m"]
      if switches.has_key("--mode"):
         self.mode = switches["--mode"]
      if switches.has_key("-O") or switches.has_key("--output"):
         self.output = True
      if switches.has_key("-d") or switches.has_key("--debug"):
         self.debug = True
      if switches.has_key("-s") or switches.has_key("--stack"):
         self.stacktrace = True
      if switches.has_key("-D") or switches.has_key("--diagnostics"):
         self.diagnostics = True
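The parsing strategy above can be reproduced with the standard getopt module. This self-contained sketch covers only the logfile option (the switch strings here are trimmed stand-ins for the real SHORT_SWITCHES/LONG_SWITCHES constants) and demonstrates the two precedence rules from the docstring: the long switch wins over the short form, and the last occurrence of a repeated switch wins:

```python
import getopt

SHORT_SWITCHES = "l:"           # stand-in: just -l with a required value
LONG_SWITCHES = ["logfile=", ]  # stand-in: just --logfile with a value

def parse_logfile(argument_list):
    """Mimics _parseArgumentList for the logfile option only."""
    switches = {}
    opts, actions = getopt.getopt(argument_list, SHORT_SWITCHES, LONG_SWITCHES)
    for o, a in opts:            # pushing into a dict means the last entry wins
        switches[o] = a
    logfile = None
    if "-l" in switches:         # short form checked first...
        logfile = switches["-l"]
    if "--logfile" in switches:  # ...so the long form takes precedence
        logfile = switches["--logfile"]
    return logfile, actions

# Long switch beats short; non-option words are left over as actions.
logfile, actions = parse_logfile(["-l", "short.log", "--logfile", "long.log", "collect"])
# logfile == "long.log", actions == ["collect"]
```

Note that plain getopt.getopt stops at the first non-option argument, which is why the actions must follow all switches on a cback command line.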
#########################################################################
# Main routine
#########################################################################

if __name__ == "__main__":
   result = cli()
   sys.exit(result)

CedarBackup2-2.22.0/doc/interface/help.html

Help

    API Documentation

    This document contains the API (Application Programming Interface) documentation for CedarBackup2. Documentation for the Python objects defined by the project is divided into separate pages for each package, module, and class. The API documentation also includes two pages containing information about the project as a whole: a trees page, and an index page.

    Object Documentation

    Each Package Documentation page contains:

    • A description of the package.
    • A list of the modules and sub-packages contained by the package.
    • A summary of the classes defined by the package.
    • A summary of the functions defined by the package.
    • A summary of the variables defined by the package.
    • A detailed description of each function defined by the package.
    • A detailed description of each variable defined by the package.

    Each Module Documentation page contains:

    • A description of the module.
    • A summary of the classes defined by the module.
    • A summary of the functions defined by the module.
    • A summary of the variables defined by the module.
    • A detailed description of each function defined by the module.
    • A detailed description of each variable defined by the module.

    Each Class Documentation page contains:

    • A class inheritance diagram.
    • A list of known subclasses.
    • A description of the class.
    • A summary of the methods defined by the class.
    • A summary of the instance variables defined by the class.
    • A summary of the class (static) variables defined by the class.
    • A detailed description of each method defined by the class.
    • A detailed description of each instance variable defined by the class.
    • A detailed description of each class (static) variable defined by the class.

    Project Documentation

    The Trees page contains the module and class hierarchies:

    • The module hierarchy lists every package and module, with modules grouped into packages. At the top level, and within each package, modules and sub-packages are listed alphabetically.
    • The class hierarchy lists every class, grouped by base class. If a class has more than one base class, then it will be listed under each base class. At the top level, and under each base class, classes are listed alphabetically.

    The Index page contains indices of terms and identifiers:

    • The term index lists every term indexed by any object's documentation. For each term, the index provides links to each place where the term is indexed.
    • The identifier index lists the (short) name of every package, module, class, method, function, variable, and parameter. For each identifier, the index provides a short description, and a link to its documentation.

    The Table of Contents

    The table of contents occupies the two frames on the left side of the window. The upper-left frame displays the project contents, and the lower-left frame displays the module contents:

    The project contents frame contains a list of all packages and modules that are defined by the project. Clicking on an entry will display its contents in the module contents frame. Clicking on a special entry, labeled "Everything," will display the contents of the entire project.

    The module contents frame contains a list of every submodule, class, type, exception, function, and variable defined by a module or package. Clicking on an entry will display its documentation in the API documentation frame. Clicking on the name of the module, at the top of the frame, will display the documentation for the module itself.

    The "frames" and "no frames" buttons below the top navigation bar can be used to control whether the table of contents is displayed or not.

    The Navigation Bar

A navigation bar is located at the top and bottom of every page. It indicates what type of page you are currently viewing, and allows you to go to related pages. The following table describes the labels on the navigation bar. Note that some labels (such as [Parent]) are not displayed on all pages.

    Label Highlighted when... Links to...
    [Parent] (never highlighted) the parent of the current package
    [Package] viewing a package the package containing the current object
    [Module] viewing a module the module containing the current object
    [Class] viewing a class the class containing the current object
    [Trees] viewing the trees page the trees page
    [Index] viewing the index page the index page
    [Help] viewing the help page the help page

    The "show private" and "hide private" buttons below the top navigation bar can be used to control whether documentation for private objects is displayed. Private objects are usually defined as objects whose (short) names begin with a single underscore, but do not end with an underscore. For example, "_x", "__pprint", and "epydoc.epytext._tokenize" are private objects; but "re.sub", "__init__", and "type_" are not. However, if a module defines the "__all__" variable, then its contents are used to decide which objects are private.

    A timestamp below the bottom navigation bar indicates when each page was last updated.

CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.subversion-module.html

    Module subversion


    Classes

    BDBRepository
    FSFSRepository
    LocalConfig
    Repository
    RepositoryDir
    SubversionConfig

    Functions

    backupBDBRepository
    backupFSFSRepository
    backupRepository
    executeAction
    getYoungestRevision

    Variables

    REVISION_PATH_EXTENSION
    SVNADMIN_COMMAND
    SVNLOOK_COMMAND
    __package__
    logger

CedarBackup2-2.22.0/doc/interface/CedarBackup2.util-pysrc.html

CedarBackup2.util

    Source Code for Module CedarBackup2.util

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2008,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# Portions copyright (c) 2001, 2002 Python Software Foundation.
# All Rights Reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: util.py 1042 2013-05-10 02:10:00Z pronovic $
# Purpose  : Provides general-purpose utilities.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides general-purpose utilities.

@sort: AbsolutePathList, ObjectTypeList, RestrictedContentList, RegexMatchList,
       RegexList, _Vertex, DirectedGraph, PathResolverSingleton,
       sortDict, convertSize, getUidGid, changeOwnership, splitCommandLine,
       resolveCommand, executeCommand, calculateFileAge, encodePath, nullDevice,
       deriveDayOfWeek, isStartOfWeek, buildNormalizedPath,
       ISO_SECTOR_SIZE, BYTES_PER_SECTOR,
       BYTES_PER_KBYTE, BYTES_PER_MBYTE, BYTES_PER_GBYTE, KBYTES_PER_MBYTE, MBYTES_PER_GBYTE,
       SECONDS_PER_MINUTE, MINUTES_PER_HOUR, HOURS_PER_DAY, SECONDS_PER_DAY,
       UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, UNIT_SECTORS

@var ISO_SECTOR_SIZE: Size of an ISO image sector, in bytes.
@var BYTES_PER_SECTOR: Number of bytes (B) per ISO sector.
@var BYTES_PER_KBYTE: Number of bytes (B) per kilobyte (kB).
@var BYTES_PER_MBYTE: Number of bytes (B) per megabyte (MB).
@var BYTES_PER_GBYTE: Number of bytes (B) per gigabyte (GB).
@var KBYTES_PER_MBYTE: Number of kilobytes (kB) per megabyte (MB).
@var MBYTES_PER_GBYTE: Number of megabytes (MB) per gigabyte (GB).
@var SECONDS_PER_MINUTE: Number of seconds per minute.
@var MINUTES_PER_HOUR: Number of minutes per hour.
@var HOURS_PER_DAY: Number of hours per day.
@var SECONDS_PER_DAY: Number of seconds per day.
@var UNIT_BYTES: Constant representing the byte (B) unit for conversion.
@var UNIT_KBYTES: Constant representing the kilobyte (kB) unit for conversion.
@var UNIT_MBYTES: Constant representing the megabyte (MB) unit for conversion.
@var UNIT_GBYTES: Constant representing the gigabyte (GB) unit for conversion.
@var UNIT_SECTORS: Constant representing the ISO sector unit for conversion.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""


########################################################################
# Imported modules
########################################################################

import sys
import math
import os
import re
import time
import logging
import string  # pylint: disable=W0402
from subprocess import Popen, STDOUT, PIPE

from CedarBackup2.release import VERSION, DATE

try:
   import pwd
   import grp
   _UID_GID_AVAILABLE = True
except ImportError:
   _UID_GID_AVAILABLE = False


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.util")
outputLogger = logging.getLogger("CedarBackup2.output")

ISO_SECTOR_SIZE    = 2048.0   # in bytes
BYTES_PER_SECTOR   = ISO_SECTOR_SIZE

BYTES_PER_KBYTE    = 1024.0
KBYTES_PER_MBYTE   = 1024.0
MBYTES_PER_GBYTE   = 1024.0
BYTES_PER_MBYTE    = BYTES_PER_KBYTE * KBYTES_PER_MBYTE
BYTES_PER_GBYTE    = BYTES_PER_MBYTE * MBYTES_PER_GBYTE

SECONDS_PER_MINUTE = 60.0
MINUTES_PER_HOUR   = 60.0
HOURS_PER_DAY      = 24.0
SECONDS_PER_DAY    = SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY

UNIT_BYTES         = 0
UNIT_KBYTES        = 1
UNIT_MBYTES        = 2
UNIT_GBYTES        = 4
UNIT_SECTORS       = 3

MTAB_FILE          = "/etc/mtab"

MOUNT_COMMAND      = [ "mount", ]
UMOUNT_COMMAND     = [ "umount", ]

DEFAULT_LANGUAGE   = "C"
LANG_VAR           = "LANG"
LOCALE_VARS        = [ "LC_ADDRESS", "LC_ALL", "LC_COLLATE",
                       "LC_CTYPE", "LC_IDENTIFICATION",
                       "LC_MEASUREMENT", "LC_MESSAGES",
                       "LC_MONETARY", "LC_NAME", "LC_NUMERIC",
                       "LC_PAPER", "LC_TELEPHONE", "LC_TIME", ]
    
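As a quick illustration of how the size constants above compose (the convertSize function that uses them appears later in the module), here is a self-contained sketch with the values restated locally; the bytes_to_units() helper is illustrative, not part of the module:

```python
# Restated locally so the sketch stands alone; values match the constants above.
BYTES_PER_SECTOR = 2048.0
BYTES_PER_KBYTE = 1024.0
BYTES_PER_MBYTE = BYTES_PER_KBYTE * 1024.0
BYTES_PER_GBYTE = BYTES_PER_MBYTE * 1024.0

def bytes_to_units(size_bytes):
    """Convert a byte count into each of the units the module defines."""
    return {
        "kbytes": size_bytes / BYTES_PER_KBYTE,
        "mbytes": size_bytes / BYTES_PER_MBYTE,
        "gbytes": size_bytes / BYTES_PER_GBYTE,
        "sectors": size_bytes / BYTES_PER_SECTOR,
    }

# A 650 MB image is 650 * 1024 * 1024 bytes, or 332800 ISO sectors.
sizes = bytes_to_units(650 * 1024 * 1024)
```

The constants are floats on purpose: divisions like these yield fractional results rather than silently truncating.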
########################################################################
# UnorderedList class definition
########################################################################

class UnorderedList(list):

   """
   Class representing an "unordered list".

   An "unordered list" is a list in which only the contents matter, not the
   order in which the contents appear in the list.

   For instance, we might be keeping track of a set of paths in a list, because
   it's convenient to have them in that form.  However, for comparison
   purposes, we would only care that the lists contain exactly the same
   contents, regardless of order.

   I have come up with two reasonable ways of doing this, plus a couple more
   that would work but would be a pain to implement.  My first method is to
   copy and sort each list, comparing the sorted versions.  This will only work
   if two lists with exactly the same members are guaranteed to sort in exactly
   the same order.  The second way would be to create two Sets and then compare
   the sets.  However, this would lose information about any duplicates in
   either list.  I've decided to go with option #1 for now.  I'll modify this
   code if I run into problems in the future.

   We override the original C{__eq__}, C{__ne__}, C{__ge__}, C{__gt__},
   C{__le__} and C{__lt__} list methods to change the definition of the various
   comparison operators.  In all cases, the comparison is changed to return the
   result of the original operation I{but instead comparing sorted lists}.
   This is going to be quite a bit slower than a normal list, so you probably
   only want to use it on small lists.
   """

   def __eq__(self, other):
      """
      Definition of C{==} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self == other}.
      """
      if other is None:
         return False
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__eq__(otherSorted)

   def __ne__(self, other):
      """
      Definition of C{!=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self != other}.
      """
      if other is None:
         return True
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__ne__(otherSorted)

   def __ge__(self, other):
      """
      Definition of C{>=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self >= other}.
      """
      if other is None:
         return True
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__ge__(otherSorted)

   def __gt__(self, other):
      """
      Definition of C{>} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self > other}.
      """
      if other is None:
         return True
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__gt__(otherSorted)

   def __le__(self, other):
      """
      Definition of C{<=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self <= other}.
      """
      if other is None:
         return False
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__le__(otherSorted)

   def __lt__(self, other):
      """
      Definition of C{<} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self < other}.
      """
      if other is None:
         return False
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__lt__(otherSorted)

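The sorted-copy comparison described in the docstring ("option #1") can be exercised directly. A minimal restatement of the same idea, with the class name chosen here only for illustration:

```python
class UnorderedPair(list):
    """Minimal restatement of the UnorderedList idea: equality ignores
    order by comparing sorted copies, but (unlike a set comparison)
    duplicate entries remain significant."""
    def __eq__(self, other):
        if other is None:
            return False
        return sorted(self) == sorted(other)
    def __ne__(self, other):
        return not self.__eq__(other)

# Order is ignored...
assert UnorderedPair(["/etc", "/home"]) == UnorderedPair(["/home", "/etc"])
# ...but duplicates still matter, which a set-based comparison would lose.
assert UnorderedPair(["/etc", "/etc"]) != UnorderedPair(["/etc"])
```

This is exactly the trade-off the docstring weighs against using Sets: sorting preserves duplicate counts at the cost of O(n log n) work per comparison.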
########################################################################
# AbsolutePathList class definition
########################################################################

class AbsolutePathList(UnorderedList):

    """
    Class representing a list of absolute paths.

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list is an absolute path.

    Each item added to the list is encoded using L{encodePath}.  If we don't do
    this, we have problems trying certain operations between strings and unicode
    objects, particularly for "odd" filenames that can't be encoded in standard
    ASCII.
    """

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item is not an absolute path.
        """
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
        list.append(self, encodePath(item))

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item is not an absolute path.
        """
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
        list.insert(self, index, encodePath(item))

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item is not an absolute path.
        """
        for item in seq:
            if not os.path.isabs(item):
                raise ValueError("Not an absolute path: [%s]" % item)
        for item in seq:
            list.append(self, encodePath(item))

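The validate-then-delegate pattern used by C{AbsolutePathList} can be sketched in a self-contained form.  This is a Python 3 sketch; the L{encodePath} step is omitted because it only matters for the Python 2 unicode issues described above:

```python
import os.path

class AbsolutePathList(list):
    """List that only accepts absolute paths (order is not significant)."""

    def append(self, item):
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
        list.append(self, item)

    def insert(self, index, item):
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
        list.insert(self, index, item)

    def extend(self, seq):
        # Validate the whole sequence first, so a failure leaves the list unchanged.
        for item in seq:
            if not os.path.isabs(item):
                raise ValueError("Not an absolute path: [%s]" % item)
        for item in seq:
            list.append(self, item)
```

Note the two-pass C{extend}: validating everything before appending anything means a bad item cannot leave the list half-modified.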
########################################################################
# ObjectTypeList class definition
########################################################################

class ObjectTypeList(UnorderedList):

    """
    Class representing a list containing only objects with a certain type.

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list matches the type that is requested.  The
    comparison uses the built-in C{isinstance}, which should allow subclasses
    of the requested type to be added to the list as well.

    The C{objectName} value will be used in exceptions, i.e. C{"Item must be a
    CollectDir object."} if C{objectName} is C{"CollectDir"}.
    """

    def __init__(self, objectType, objectName):
        """
        Initializes a typed list for a particular type.
        @param objectType: Type that the list elements must match.
        @param objectName: Short string containing the "name" of the type.
        """
        super(ObjectTypeList, self).__init__()
        self.objectType = objectType
        self.objectName = objectName

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item does not match requested type.
        """
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s object." % self.objectName)
        list.append(self, item)

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item does not match requested type.
        """
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s object." % self.objectName)
        list.insert(self, index, item)

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item does not match requested type.
        """
        for item in seq:
            if not isinstance(item, self.objectType):
                raise ValueError("All items must be %s objects." % self.objectName)
        list.extend(self, seq)

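Because the check uses C{isinstance}, subclasses of the configured type pass as well.  A minimal Python 3 sketch (the C{CollectDir} classes here are hypothetical stand-ins, not the real configuration objects):

```python
class ObjectTypeList(list):
    """List restricted to instances of a given type; subclasses are allowed."""

    def __init__(self, objectType, objectName):
        super().__init__()
        self.objectType = objectType
        self.objectName = objectName

    def append(self, item):
        # isinstance() accepts objectType itself and any subclass of it.
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s object." % self.objectName)
        list.append(self, item)

class CollectDir:            # hypothetical stand-in for a config object
    pass

class SpecialCollectDir(CollectDir):
    pass
```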
########################################################################
# RestrictedContentList class definition
########################################################################

class RestrictedContentList(UnorderedList):

    """
    Class representing a list containing only objects with certain values.

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list is among the valid values.  We use a standard
    comparison, so pretty much anything can be in the list of valid values.

    The C{valuesDescr} value will be used in exceptions, i.e. C{"Item must be
    one of values in VALID_ACTIONS"} if C{valuesDescr} is C{"VALID_ACTIONS"}.

    @note: This class doesn't make any attempt to trap for nonsensical
    arguments.  All of the values in the values list should be of the same type
    (i.e. strings).  Then, all list operations also need to be of that type
    (i.e. you should always insert or append just strings).  If you mix types --
    for instance lists and strings -- you will likely see AttributeError
    exceptions or other problems.
    """

    def __init__(self, valuesList, valuesDescr, prefix=None):
        """
        Initializes a list restricted to containing certain values.
        @param valuesList: List of valid values.
        @param valuesDescr: Short string describing list of values.
        @param prefix: Prefix to use in error messages (None results in prefix "Item")
        """
        super(RestrictedContentList, self).__init__()
        self.prefix = "Item"
        if prefix is not None: self.prefix = prefix
        self.valuesList = valuesList
        self.valuesDescr = valuesDescr

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item is not in the values list.
        """
        if item not in self.valuesList:
            raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
        list.append(self, item)

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item is not in the values list.
        """
        if item not in self.valuesList:
            raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
        list.insert(self, index, item)

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item is not in the values list.
        """
        for item in seq:
            if item not in self.valuesList:
                raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
        list.extend(self, seq)

########################################################################
# RegexMatchList class definition
########################################################################

class RegexMatchList(UnorderedList):

    """
    Class representing a list containing only strings that match a regular expression.

    If C{emptyAllowed} is passed in as C{False}, then empty strings are
    explicitly disallowed, even if they happen to match the regular expression.
    (C{None} values are always disallowed, since string operations are not
    permitted on C{None}.)

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list matches the indicated regular expression.

    @note: If you try to put values that are not strings into the list, you will
    likely get either TypeError or AttributeError exceptions as a result.
    """

    def __init__(self, valuesRegex, emptyAllowed=True, prefix=None):
        """
        Initializes a list restricted to containing certain values.
        @param valuesRegex: Regular expression that must be matched, as a string
        @param emptyAllowed: Indicates whether empty or None values are allowed.
        @param prefix: Prefix to use in error messages (None results in prefix "Item")
        """
        super(RegexMatchList, self).__init__()
        self.prefix = "Item"
        if prefix is not None: self.prefix = prefix
        self.valuesRegex = valuesRegex
        self.emptyAllowed = emptyAllowed
        self.pattern = re.compile(self.valuesRegex)

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item is None
        @raise ValueError: If item is empty and empty values are not allowed
        @raise ValueError: If item does not match the configured regular expression
        """
        if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("%s cannot be empty." % self.prefix)
        if not self.pattern.search(item):
            raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
        list.append(self, item)

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item is None
        @raise ValueError: If item is empty and empty values are not allowed
        @raise ValueError: If item does not match the configured regular expression
        """
        if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("%s cannot be empty." % self.prefix)
        if not self.pattern.search(item):
            raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
        list.insert(self, index, item)

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item is None
        @raise ValueError: If any item is empty and empty values are not allowed
        @raise ValueError: If any item does not match the configured regular expression
        """
        for item in seq:
            if item is None or (not self.emptyAllowed and item == ""):
                raise ValueError("%s cannot be empty." % self.prefix)
            if not self.pattern.search(item):
                raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
        list.extend(self, seq)

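A condensed Python 3 sketch of the append-side checks, showing how C{emptyAllowed} and the compiled pattern interact (only C{append} is shown; C{insert} and C{extend} follow the same shape, and the example pattern and prefix are hypothetical):

```python
import re

class RegexMatchList(list):
    """List of strings that must match a compiled regular expression."""

    def __init__(self, valuesRegex, emptyAllowed=True, prefix=None):
        super().__init__()
        self.prefix = prefix if prefix is not None else "Item"
        self.emptyAllowed = emptyAllowed
        self.pattern = re.compile(valuesRegex)

    def append(self, item):
        # The emptiness check runs first, so an empty string is rejected
        # even when it would technically match the pattern.
        if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("%s cannot be empty." % self.prefix)
        if not self.pattern.search(item):
            raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
        list.append(self, item)
```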
########################################################################
# RegexList class definition
########################################################################

class RegexList(UnorderedList):

    """
    Class representing a list of valid regular expression strings.

    This is an unordered list.

    We override the C{append}, C{insert} and C{extend} methods to ensure that
    any item added to the list is a valid regular expression.
    """

    def append(self, item):
        """
        Overrides the standard C{append} method.
        @raise ValueError: If item is not a valid regular expression.
        """
        try:
            re.compile(item)
        except re.error:
            raise ValueError("Not a valid regular expression: [%s]" % item)
        list.append(self, item)

    def insert(self, index, item):
        """
        Overrides the standard C{insert} method.
        @raise ValueError: If item is not a valid regular expression.
        """
        try:
            re.compile(item)
        except re.error:
            raise ValueError("Not a valid regular expression: [%s]" % item)
        list.insert(self, index, item)

    def extend(self, seq):
        """
        Overrides the standard C{extend} method.
        @raise ValueError: If any item is not a valid regular expression.
        """
        for item in seq:
            try:
                re.compile(item)
            except re.error:
                raise ValueError("Not a valid regular expression: [%s]" % item)
        for item in seq:
            list.append(self, item)

########################################################################
# Directed graph implementation
########################################################################

class _Vertex(object):

    """
    Represents a vertex (or node) in a directed graph.
    """

    def __init__(self, name):
        """
        Constructor.
        @param name: Name of this graph vertex.
        @type name: String value.
        """
        self.name = name
        self.endpoints = []
        self.state = None


class DirectedGraph(object):

    """
    Represents a directed graph.

    A graph B{G=(V,E)} consists of a set of vertices B{V} together with a set
    B{E} of vertex pairs or edges.  In a directed graph, each edge also has an
    associated direction (from vertex B{v1} to vertex B{v2}).  A C{DirectedGraph}
    object provides a way to construct a directed graph and execute a
    depth-first search.

    This data structure was designed based on the graphing chapter in
    U{The Algorithm Design Manual<http://www2.toki.or.id/book/AlgDesignManual/>},
    by Steven S. Skiena.

    This class is intended to be used by Cedar Backup for dependency ordering.
    Because of this, it's not quite general-purpose.  Unlike a "general" graph,
    every vertex in this graph has at least one edge pointing to it, from a
    special "start" vertex.  This is so no vertices get "lost" either because
    they have no dependencies or because nothing depends on them.
    """

    _UNDISCOVERED = 0
    _DISCOVERED = 1
    _EXPLORED = 2

    def __init__(self, name):
        """
        Directed graph constructor.

        @param name: Name of this graph.
        @type name: String value.
        """
        if name is None or name == "":
            raise ValueError("Graph name must be non-empty.")
        self._name = name
        self._vertices = {}
        self._startVertex = _Vertex(None)  # start vertex is the only vertex with no name

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "DirectedGraph(%s)" % self.name

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of the comparison operator for this class.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        # pylint: disable=W0212
        if other is None:
            return 1
        if self.name != other.name:
            if self.name < other.name:
                return -1
            else:
                return 1
        if self._vertices != other._vertices:
            if self._vertices < other._vertices:
                return -1
            else:
                return 1
        return 0

    def _getName(self):
        """
        Property target used to get the graph name.
        """
        return self._name

    name = property(_getName, None, None, "Name of the graph.")

    def createVertex(self, name):
        """
        Creates a named vertex.
        @param name: vertex name
        @raise ValueError: If the vertex name is C{None} or empty.
        """
        if name is None or name == "":
            raise ValueError("Vertex name must be non-empty.")
        vertex = _Vertex(name)
        self._startVertex.endpoints.append(vertex)  # so every vertex is connected at least once
        self._vertices[name] = vertex

    def createEdge(self, start, finish):
        """
        Adds an edge with an associated direction, from C{start} vertex to C{finish} vertex.
        @param start: Name of start vertex.
        @param finish: Name of finish vertex.
        @raise ValueError: If one of the named vertices is unknown.
        """
        try:
            startVertex = self._vertices[start]
            finishVertex = self._vertices[finish]
            startVertex.endpoints.append(finishVertex)
        except KeyError, e:
            raise ValueError("Vertex [%s] could not be found." % e)

    def topologicalSort(self):
        """
        Implements a topological sort of the graph.

        This method also enforces that the graph is a directed acyclic graph,
        which is a requirement of a topological sort.

        A directed acyclic graph (or "DAG") is a directed graph with no directed
        cycles.  A topological sort of a DAG is an ordering on the vertices such
        that all edges go from left to right.  Only an acyclic graph can have a
        topological sort, and any DAG has at least one topological sort.

        Since a topological sort only makes sense for an acyclic graph, this
        method throws an exception if a cycle is found.

        A depth-first search only makes sense if the graph is acyclic.  If the
        graph contains any cycles, it is not possible to determine a consistent
        ordering for the vertices.

        @note: If a particular vertex has no edges, then its position in the
        final list depends on the order in which the vertices were created in the
        graph.  If you're using this method to determine a dependency order, this
        makes sense: a vertex with no dependencies can go anywhere (and will).

        @return: Ordering on the vertices so that all edges go from left to right.

        @raise ValueError: If a cycle is found in the graph.
        """
        ordering = []
        for key in self._vertices:
            vertex = self._vertices[key]
            vertex.state = self._UNDISCOVERED
        for key in self._vertices:
            vertex = self._vertices[key]
            if vertex.state == self._UNDISCOVERED:
                self._topologicalSort(self._startVertex, ordering)
        return ordering

    def _topologicalSort(self, vertex, ordering):
        """
        Recursive depth-first search function implementing topological sort.
        @param vertex: Vertex to search
        @param ordering: List of vertices in proper order
        """
        vertex.state = self._DISCOVERED
        for endpoint in vertex.endpoints:
            if endpoint.state == self._UNDISCOVERED:
                self._topologicalSort(endpoint, ordering)
            elif endpoint.state != self._EXPLORED:
                raise ValueError("Cycle found in graph (found '%s' while searching '%s')." % (vertex.name, endpoint.name))
        if vertex.name is not None:
            ordering.insert(0, vertex.name)
        vertex.state = self._EXPLORED

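The UNDISCOVERED/DISCOVERED/EXPLORED state machine above can be sketched standalone.  Prepending a vertex only after all of its endpoints have been explored is what guarantees the left-to-right edge property, and seeing a DISCOVERED (but not yet EXPLORED) endpoint means the search has looped back on itself.  This is a Python 3 sketch using plain dicts in place of C{_Vertex} objects and a synthetic start loop in place of the start vertex:

```python
UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2

def topological_sort(edges, vertices):
    """Return vertices ordered so all edges go left to right; raise on cycles.
    edges maps each vertex name to the list of vertices it points at."""
    state = {v: UNDISCOVERED for v in vertices}
    ordering = []

    def visit(vertex):
        state[vertex] = DISCOVERED
        for endpoint in edges.get(vertex, []):
            if state[endpoint] == UNDISCOVERED:
                visit(endpoint)
            elif state[endpoint] != EXPLORED:
                # DISCOVERED but not EXPLORED: we looped back onto our own path.
                raise ValueError("Cycle found in graph.")
        ordering.insert(0, vertex)  # prepend only after all descendants are placed
        state[vertex] = EXPLORED

    for vertex in vertices:
        if state[vertex] == UNDISCOVERED:
            visit(vertex)
    return ordering
```

The action names in the usage below are illustrative only; the real dependency graph is built from configuration.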
########################################################################
# PathResolverSingleton class definition
########################################################################

class PathResolverSingleton(object):

    """
    Singleton used for resolving executable paths.

    Various functions throughout Cedar Backup (including extensions) need a way
    to resolve the path of executables that they use.  For instance, the image
    functionality needs to find the C{mkisofs} executable, and the Subversion
    extension needs to find the C{svnlook} executable.  Cedar Backup's original
    behavior was to assume that the simple name (C{"svnlook"} or whatever) was
    available on the caller's C{$PATH}, and to fail otherwise.  However, this
    turns out to be less than ideal, since for instance the root user might not
    always have executables like C{svnlook} in its path.

    One solution is to specify a path (either via an absolute path or some sort
    of path insertion or path appending mechanism) that would apply to the
    C{executeCommand()} function.  This is not difficult to implement, but it
    seems like kind of a "big hammer" solution.  Besides that, it might also
    represent a security flaw (for instance, I prefer not to mess with root's
    C{$PATH} on the application level if I don't have to).

    The alternative is to set up some sort of configuration for the path to
    certain executables, i.e. "find C{svnlook} in C{/usr/local/bin/svnlook}" or
    whatever.  This PathResolverSingleton aims to provide a good solution to the
    mapping problem.  Callers of all sorts (extensions or not) can get an
    instance of the singleton.  Then, they call the C{lookup} method to try and
    resolve the executable they are looking for.  Through the C{lookup} method,
    the caller can also specify a default to use if a mapping is not found.
    This way, with no real effort on the part of the caller, behavior can neatly
    degrade to something equivalent to the current behavior if there is no
    special mapping or if the singleton was never initialized in the first
    place.

    Even better, extensions automagically get access to the same resolver
    functionality, and they don't even need to understand how the mapping
    happens.  All extension authors need to do is document what executables
    their code requires, and the standard resolver configuration section will
    meet their needs.

    The class should be initialized once through the constructor somewhere in
    the main routine.  Then, the main routine should call the L{fill} method to
    fill in the resolver's internal structures.  Everyone else who needs to
    resolve a path will get an instance of the class using L{getInstance} and
    will then just call the L{lookup} method.

    @cvar _instance: Holds a reference to the singleton
    @ivar _mapping: Internal mapping from resource name to path.
    """

    _instance = None  # Holds a reference to singleton instance

    class _Helper:
        """Helper class to provide a singleton factory method."""
        def __init__(self):
            pass
        def __call__(self, *args, **kw):
            # pylint: disable=W0212,R0201
            if PathResolverSingleton._instance is None:
                obj = PathResolverSingleton()
                PathResolverSingleton._instance = obj
            return PathResolverSingleton._instance

    getInstance = _Helper()  # Method that callers will use to get an instance

    def __init__(self):
        """Singleton constructor, which just creates the singleton instance."""
        if PathResolverSingleton._instance is not None:
            raise RuntimeError("Only one instance of PathResolverSingleton is allowed!")
        PathResolverSingleton._instance = self
        self._mapping = { }

    def lookup(self, name, default=None):
        """
        Looks up name and returns the resolved path associated with the name.
        @param name: Name of the path resource to resolve.
        @param default: Default to return if resource cannot be resolved.
        @return: Resolved path associated with name, or default if name can't be resolved.
        """
        value = default
        if name in self._mapping.keys():
            value = self._mapping[name]
        logger.debug("Resolved command [%s] to [%s]." % (name, value))
        return value

    def fill(self, mapping):
        """
        Fills in the singleton's internal mapping from name to resource.
        @param mapping: Mapping from resource name to path.
        @type mapping: Dictionary mapping name to path, both as strings.
        """
        self._mapping = { }
        for key in mapping.keys():
            self._mapping[key] = mapping[key]

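Usage follows the initialize-once, look-up-everywhere pattern described above.  A minimal Python 3 sketch, using a classmethod in place of the C{_Helper} factory object; the mapping shown is purely hypothetical:

```python
class PathResolverSingleton:
    """Maps executable names to configured paths, with a caller-supplied default."""

    _instance = None  # holds the singleton instance

    @classmethod
    def getInstance(cls):
        # Lazily create the singleton on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._mapping = {}

    def fill(self, mapping):
        """Replace the internal name-to-path mapping."""
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        """Return the configured path for name, or default if unmapped."""
        return self._mapping.get(name, default)
```

Passing the bare command name as its own default (as in the last assertion below) reproduces the graceful degradation the docstring describes: with no mapping configured, the command is simply searched on C{$PATH} as before.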
########################################################################
# Pipe class definition
########################################################################

class Pipe(Popen):

    """
    Specialized pipe class for use by C{executeCommand}.

    The L{executeCommand} function needs a specialized way of interacting
    with a pipe.  First, C{executeCommand} only reads from the pipe, and
    never writes to it.  Second, C{executeCommand} needs a way to discard all
    output written to C{stderr}, as a means of simulating the shell
    C{2>/dev/null} construct.
    """

    def __init__(self, cmd, bufsize=-1, ignoreStderr=False):
        stderr = STDOUT
        if ignoreStderr:
            devnull = nullDevice()
            stderr = os.open(devnull, os.O_RDWR)
        Popen.__init__(self, shell=False, args=cmd, bufsize=bufsize, stdin=None, stdout=PIPE, stderr=stderr)

########################################################################
# Diagnostics class definition
########################################################################

class Diagnostics(object):

    """
    Class holding runtime diagnostic information.

    Diagnostic information is information that is useful to get from users for
    debugging purposes.  I'm consolidating it all here into one object.

    @sort: __init__, __repr__, __str__
    """
    # pylint: disable=R0201

    def __init__(self):
        """
        Constructor for the C{Diagnostics} class.
        """

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "Diagnostics()"

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def getValues(self):
        """
        Get a map containing all of the diagnostic values.
        @return: Map from diagnostic name to diagnostic value.
        """
        values = {}
        values['version'] = self.version
        values['interpreter'] = self.interpreter
        values['platform'] = self.platform
        values['encoding'] = self.encoding
        values['locale'] = self.locale
        values['timestamp'] = self.timestamp
        return values

    def printDiagnostics(self, fd=sys.stdout, prefix=""):
        """
        Pretty-print diagnostic information to a file descriptor.
        @param fd: File descriptor used to print information.
        @param prefix: Prefix string (if any) to place onto printed lines.
        @note: The C{fd} is used rather than C{print} to facilitate unit testing.
        """
        lines = self._buildDiagnosticLines(prefix)
        for line in lines:
            fd.write("%s\n" % line)

    def logDiagnostics(self, method, prefix=""):
        """
        Pretty-print diagnostic information using a logger method.
        @param method: Logger method to use for logging (i.e. logger.info)
        @param prefix: Prefix string (if any) to place onto printed lines.
        """
        lines = self._buildDiagnosticLines(prefix)
        for line in lines:
            method("%s" % line)

    def _buildDiagnosticLines(self, prefix=""):
        """
        Build a set of pretty-printed diagnostic lines.
        @param prefix: Prefix string (if any) to place onto printed lines.
        @return: List of strings, not terminated by newlines.
        """
        values = self.getValues()
        keys = values.keys()
        keys.sort()
        tmax = Diagnostics._getMaxLength(keys) + 3  # three extra dots in output
        lines = []
        for key in keys:
            title = key.title()
            title += (tmax - len(title)) * '.'
            value = values[key]
            line = "%s%s: %s" % (prefix, title, value)
            lines.append(line)
        return lines

    @staticmethod
    def _getMaxLength(values):
        """
        Get the maximum length from among a list of strings.
        """
        tmax = 0
        for value in values:
            if len(value) > tmax:
                tmax = len(value)
        return tmax

    def _getVersion(self):
        """
        Property target to get the Cedar Backup version.
        """
        return "Cedar Backup %s (%s)" % (VERSION, DATE)

    def _getInterpreter(self):
        """
        Property target to get the Python interpreter version.
        """
        version = sys.version_info
        return "Python %d.%d.%d (%s)" % (version[0], version[1], version[2], version[3])

    def _getEncoding(self):
        """
        Property target to get the filesystem encoding.
        """
        return sys.getfilesystemencoding() or sys.getdefaultencoding()

    def _getPlatform(self):
        """
        Property target to get the operating system platform.
        """
        try:
            if sys.platform.startswith("win"):
                windowsPlatforms = [ "Windows 3.1", "Windows 95/98/ME", "Windows NT/2000/XP", "Windows CE", ]
                wininfo = sys.getwindowsversion()  # pylint: disable=E1101
                winversion = "%d.%d.%d" % (wininfo[0], wininfo[1], wininfo[2])
                winplatform = windowsPlatforms[wininfo[3]]
                wintext = wininfo[4]  # i.e. "Service Pack 2"
                return "%s (%s %s %s)" % (sys.platform, winplatform, winversion, wintext)
            else:
                uname = os.uname()
                sysname = uname[0]  # i.e. Linux
                release = uname[2]  # i.e. 2.6.18-2
                machine = uname[4]  # i.e. i686
                return "%s (%s %s %s)" % (sys.platform, sysname, release, machine)
        except:
            return sys.platform

    def _getLocale(self):
        """
        Property target to get the default locale that is in effect.
        """
        try:
            import locale
            return locale.getdefaultlocale()[0]
        except:
            return "(unknown)"

    def _getTimestamp(self):
        """
        Property target to get a current date/time stamp.
        """
        try:
            import datetime
            return datetime.datetime.utcnow().ctime() + " UTC"
        except:
            return "(unknown)"

    version = property(_getVersion, None, None, "Cedar Backup version.")
    interpreter = property(_getInterpreter, None, None, "Python interpreter version.")
    platform = property(_getPlatform, None, None, "Platform identifying information.")
    encoding = property(_getEncoding, None, None, "Filesystem encoding that is in effect.")
    locale = property(_getLocale, None, None, "Locale that is in effect.")
    timestamp = property(_getTimestamp, None, None, "Current timestamp.")

########################################################################
# General utility functions
########################################################################

######################
# sortDict() function
######################

def sortDict(d):
    """
    Returns the keys of the dictionary sorted by value.

    There are cuter ways to do this in Python 2.4, but we were originally
    attempting to stay compatible with Python 2.3.

    @param d: Dictionary to operate on
    @return: List of dictionary keys sorted in order by dictionary value.
    """
    items = d.items()
    items.sort(lambda x, y: cmp(x[1], y[1]))
    return [key for key, value in items]

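The C{cmp}-based list sort above is Python 2 only; the same behavior can be sketched with C{sorted} and a key function (C{sort_dict} is a hypothetical name for this Python 3 equivalent):

```python
def sort_dict(d):
    """Return the keys of d, ordered by their associated values
    (a Python 3 equivalent of the cmp-based sortDict above)."""
    return [key for key, value in sorted(d.items(), key=lambda item: item[1])]
```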
########################
# removeKeys() function
########################

def removeKeys(d, keys):
    """
    Removes all of the keys from the dictionary.
    The dictionary is altered in-place.
    Each key must exist in the dictionary.
    @param d: Dictionary to operate on
    @param keys: List of keys to remove
    @raise KeyError: If one of the keys does not exist
    """
    for key in keys:
        del d[key]

#########################
# convertSize() function
#########################

def convertSize(size, fromUnit, toUnit):
    """
    Converts a size in one unit to a size in another unit.

    This is just a convenience function so that the functionality can be
    implemented in just one place.  Internally, we convert values to bytes and
    then to the final unit.

    The available units are:

        - C{UNIT_BYTES} - Bytes
        - C{UNIT_KBYTES} - Kilobytes, where 1 kB = 1024 B
        - C{UNIT_MBYTES} - Megabytes, where 1 MB = 1024 kB
        - C{UNIT_GBYTES} - Gigabytes, where 1 GB = 1024 MB
        - C{UNIT_SECTORS} - Sectors, where 1 sector = 2048 B

    @param size: Size to convert
    @type size: Integer or float value in units of C{fromUnit}

    @param fromUnit: Unit to convert from
    @type fromUnit: One of the units listed above

    @param toUnit: Unit to convert to
    @type toUnit: One of the units listed above

    @return: Number converted to new unit, as a float.
    @raise ValueError: If one of the units is invalid.
    """
    if size is None:
        raise ValueError("Cannot convert size of None.")
    if fromUnit == UNIT_BYTES:
        byteSize = float(size)
    elif fromUnit == UNIT_KBYTES:
        byteSize = float(size) * BYTES_PER_KBYTE
    elif fromUnit == UNIT_MBYTES:
        byteSize = float(size) * BYTES_PER_MBYTE
    elif fromUnit == UNIT_GBYTES:
        byteSize = float(size) * BYTES_PER_GBYTE
    elif fromUnit == UNIT_SECTORS:
        byteSize = float(size) * BYTES_PER_SECTOR
    else:
        raise ValueError("Unknown 'from' unit %s." % fromUnit)
    if toUnit == UNIT_BYTES:
        return byteSize
    elif toUnit == UNIT_KBYTES:
        return byteSize / BYTES_PER_KBYTE
    elif toUnit == UNIT_MBYTES:
        return byteSize / BYTES_PER_MBYTE
    elif toUnit == UNIT_GBYTES:
        return byteSize / BYTES_PER_GBYTE
    elif toUnit == UNIT_SECTORS:
        return byteSize / BYTES_PER_SECTOR
    else:
        raise ValueError("Unknown 'to' unit %s." % toUnit)

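Since every unit reduces to a byte multiplier, the two if/elif chains above can be collapsed into a table lookup.  A sketch under the same definitions (1 kB = 1024 B, 1 sector = 2048 B), using string unit names in place of the module's C{UNIT_*} constants:

```python
# Bytes per unit; normalizing through bytes mirrors the function above.
BYTES_PER_UNIT = {
    "UNIT_BYTES":   1.0,
    "UNIT_KBYTES":  1024.0,
    "UNIT_MBYTES":  1024.0 * 1024.0,
    "UNIT_GBYTES":  1024.0 * 1024.0 * 1024.0,
    "UNIT_SECTORS": 2048.0,
}

def convert_size(size, fromUnit, toUnit):
    """Convert size from one unit to another via an intermediate byte count."""
    if size is None:
        raise ValueError("Cannot convert size of None.")
    try:
        byteSize = float(size) * BYTES_PER_UNIT[fromUnit]
        return byteSize / BYTES_PER_UNIT[toUnit]
    except KeyError as e:
        raise ValueError("Unknown unit %s." % e)
```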
##########################
# displayBytes() function
##########################

def displayBytes(bytes, digits=2):  # pylint: disable=W0622
    """
    Format a byte quantity so it can be sensibly displayed.

    It's rather difficult to look at a number like "72372224 bytes" and get any
    meaningful information out of it.  It would be more useful to see something
    like "69.02 MB".  That's what this function does.  Any time you want to
    display a byte value, i.e.::

        print "Size: %s bytes" % bytes

    Call this function instead::

        print "Size: %s" % displayBytes(bytes)

    What comes out will be sensibly formatted.  The indicated number of digits
    will be listed after the decimal point, rounded based on whatever rules are
    used by Python's standard C{%f} string format specifier.  (Values less than
    1 kB will be listed in bytes and will not have a decimal point, since the
    concept of a fractional byte is nonsensical.)

    @param bytes: Byte quantity.
    @type bytes: Integer number of bytes.

    @param digits: Number of digits to display after the decimal point.
    @type digits: Integer value, typically 2-5.

    @return: String, formatted for sensible display.
    """
    if bytes is None:
        raise ValueError("Cannot display byte value of None.")
    bytes = float(bytes)
    if math.fabs(bytes) < BYTES_PER_KBYTE:
        fmt = "%.0f bytes"
        value = bytes
    elif math.fabs(bytes) < BYTES_PER_MBYTE:
        fmt = "%." + "%d" % digits + "f kB"
        value = bytes / BYTES_PER_KBYTE
    elif math.fabs(bytes) < BYTES_PER_GBYTE:
        fmt = "%." + "%d" % digits + "f MB"
        value = bytes / BYTES_PER_MBYTE
    else:
        fmt = "%." + "%d" % digits + "f GB"
        value = bytes / BYTES_PER_GBYTE
    return fmt % value

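The same bucketing logic can be sketched in Python 3, using the C{%.*f} precision specifier instead of building the format string by concatenation (C{display_bytes} is a hypothetical name, not part of the module):

```python
import math

BYTES_PER_KBYTE = 1024.0
BYTES_PER_MBYTE = 1024.0 * 1024.0
BYTES_PER_GBYTE = 1024.0 * 1024.0 * 1024.0

def display_bytes(byteCount, digits=2):
    """Format a byte quantity with a human-readable unit suffix."""
    if byteCount is None:
        raise ValueError("Cannot display byte value of None.")
    byteCount = float(byteCount)
    if math.fabs(byteCount) < BYTES_PER_KBYTE:
        return "%.0f bytes" % byteCount  # whole bytes only; no fractional bytes
    elif math.fabs(byteCount) < BYTES_PER_MBYTE:
        return "%.*f kB" % (digits, byteCount / BYTES_PER_KBYTE)
    elif math.fabs(byteCount) < BYTES_PER_GBYTE:
        return "%.*f MB" % (digits, byteCount / BYTES_PER_MBYTE)
    return "%.*f GB" % (digits, byteCount / BYTES_PER_GBYTE)
```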


##################################
# getFunctionReference() function
##################################

def getFunctionReference(module, function):
   """
   Gets a reference to a named function.

   This does some hokey-pokey to get back a reference to a dynamically named
   function. For instance, say you wanted to get a reference to the
   C{os.path.isdir} function. You could use::

      myfunc = getFunctionReference("os.path", "isdir")

   Although we won't bomb out directly, behavior is pretty much undefined if
   you pass in C{None} or C{""} for either C{module} or C{function}.

   The only validation we enforce is that whatever we get back must be
   callable.

   I derived this code based on the internals of the Python unittest
   implementation. I don't claim to completely understand how it works.

   @param module: Name of module associated with function.
   @type module: Something like "os.path" or "CedarBackup2.util"

   @param function: Name of function
   @type function: Something like "isdir" or "getUidGid"

   @return: Reference to function associated with name.

   @raise ImportError: If the function cannot be found.
   @raise ValueError: If the resulting reference is not callable.

   @copyright: Some of this code, prior to customization, was originally part
   of the Python 2.3 codebase. Python code is copyright (c) 2001, 2002 Python
   Software Foundation; All Rights Reserved.
   """
   parts = []
   if module is not None and module != "":
      parts = module.split(".")
   if function is not None and function != "":
      parts.append(function)
   copy = parts[:]
   while copy:
      try:
         module = __import__(string.join(copy, "."))
         break
      except ImportError:
         del copy[-1]
         if not copy: raise
   parts = parts[1:]
   obj = module
   for part in parts:
      obj = getattr(obj, part)
   if not callable(obj):
      raise ValueError("Reference to %s.%s is not callable." % (module, function))
   return obj


#######################
# getUidGid() function
#######################

def getUidGid(user, group):
   """
   Get the uid/gid associated with a user/group pair.

   This is a no-op if user/group functionality is not available on the platform.

   @param user: User name
   @type user: User name as a string

   @param group: Group name
   @type group: Group name as a string

   @return: Tuple C{(uid, gid)} matching passed-in user and group.
   @raise ValueError: If the ownership user/group values are invalid
   """
   if _UID_GID_AVAILABLE:
      try:
         uid = pwd.getpwnam(user)[2]
         gid = grp.getgrnam(group)[2]
         return (uid, gid)
      except Exception, e:
         logger.debug("Error looking up uid and gid for [%s:%s]: %s" % (user, group, e))
         raise ValueError("Unable to look up uid and gid for passed-in user/group.")
   else:
      return (0, 0)


#############################
# changeOwnership() function
#############################

def changeOwnership(path, user, group):
   """
   Changes ownership of path to match the user and group.

   This is a no-op if user/group functionality is not available on the
   platform, or if either the passed-in user or group is C{None}. Further, we
   won't even try to do it unless running as root, since it's unlikely to work.

   @param path: Path whose ownership to change.
   @param user: User which owns file.
   @param group: Group which owns file.
   """
   if _UID_GID_AVAILABLE:
      if user is None or group is None:
         logger.debug("User or group is None, so not attempting to change owner on [%s]." % path)
      elif not isRunningAsRoot():
         logger.debug("Not root, so not attempting to change owner on [%s]." % path)
      else:
         try:
            (uid, gid) = getUidGid(user, group)
            os.chown(path, uid, gid)
         except Exception, e:
            logger.error("Error changing ownership of [%s]: %s" % (path, e))


#############################
# isRunningAsRoot() function
#############################

def isRunningAsRoot():
   """
   Indicates whether the program is running as the root user.
   """
   return os.getuid() == 0


##############################
# splitCommandLine() function
##############################

def splitCommandLine(commandLine):
   """
   Splits a command line string into a list of arguments.

   Unfortunately, there is no "standard" way to parse a command line string,
   and it's actually not an easy problem to solve portably (essentially, we
   have to emulate the shell argument-processing logic). This code only
   respects double quotes (C{"}) for grouping arguments, not single quotes
   (C{'}). Make sure you take this into account when building your command
   line.

   Incidentally, I found this particular parsing method while digging around
   in Google Groups, and I tweaked it for my own use.

   @param commandLine: Command line string
   @type commandLine: String, i.e. "cback --verbose stage store"

   @return: List of arguments, suitable for passing to C{popen2}.

   @raise ValueError: If the command line is None.
   """
   if commandLine is None:
      raise ValueError("Cannot split command line of None.")
   fields = re.findall('[^ "]+|"[^"]+"', commandLine)
   fields = map(lambda field: field.replace('"', ''), fields)
   return fields
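The regex approach described above matches either a run of non-space, non-quote characters or a double-quoted group, then strips the quotes. A minimal modern-Python sketch (the `split_command_line` name is illustrative):

```python
import re

def split_command_line(command_line):
    """Split a command line, honoring only double quotes for grouping."""
    if command_line is None:
        raise ValueError("Cannot split command line of None.")
    fields = re.findall(r'[^ "]+|"[^"]+"', command_line)
    return [field.replace('"', '') for field in fields]

print(split_command_line('cback --full "collect stage" store'))
# ['cback', '--full', 'collect stage', 'store']
```

For comparison, the standard library's `shlex.split()` also handles single quotes and backslash escapes, which this deliberately simple method does not.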


############################
# resolveCommand() function
############################

def resolveCommand(command):
   """
   Resolves the real path to a command through the path resolver mechanism.

   Both extensions and standard Cedar Backup functionality need a way to
   resolve the "real" location of various executables. Normally, they assume
   that these executables are on the system path, but some callers need to
   specify an alternate location.

   Ideally, we want to handle this configuration in a central location. The
   Cedar Backup path resolver mechanism (a singleton called
   L{PathResolverSingleton}) provides the central location to store the
   mappings. This function wraps access to the singleton, and is what all
   functions (extensions or standard functionality) should call if they need
   to find a command.

   The passed-in command must actually be a list, in the standard form used by
   all existing Cedar Backup code (something like C{["svnlook", ]}). The
   lookup will actually be done on the first element in the list, and the
   returned command will always be in list form as well.

   If the passed-in command can't be resolved or no mapping exists, then the
   command itself will be returned unchanged. This way, we neatly fall back on
   default behavior if we have no sensible alternative.

   @param command: Command to resolve.
   @type command: List form of command, i.e. C{["svnlook", ]}.

   @return: Path to command or just command itself if no mapping exists.
   """
   singleton = PathResolverSingleton.getInstance()
   name = command[0]
   result = command[:]
   result[0] = singleton.lookup(name, name)
   return result


############################
# executeCommand() function
############################

def executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None):
   """
   Executes a shell command, hopefully in a safe way.

   This function exists to replace direct calls to C{os.popen} in the Cedar
   Backup code. It's not safe to call a function such as C{os.popen()} with
   untrusted arguments, since that can cause problems if the string contains
   non-safe variables or other constructs (imagine that the argument is
   C{$WHATEVER}, but C{$WHATEVER} contains something like C{"; rm -fR ~/;
   echo"} in the current environment).

   Instead, it's safer to pass a list of arguments in the style supported by
   C{popen2} or C{popen4}. This function actually uses a specialized C{Pipe}
   class implemented using either C{subprocess.Popen} or C{popen2.Popen4}.

   Under the normal case, this function will return a tuple of C{(status,
   None)} where the status is the wait-encoded return status of the call per
   the C{popen2.Popen4} documentation. If C{returnOutput} is passed in as
   C{True}, the function will return a tuple of C{(status, output)} where
   C{output} is a list of strings, one entry per line in the output from the
   command. Output is always logged to the C{outputLogger.info()} target,
   regardless of whether it's returned.

   By default, C{stdout} and C{stderr} will be intermingled in the output.
   However, if you pass in C{ignoreStderr=True}, then only C{stdout} will be
   included in the output.

   The C{doNotLog} parameter exists so that callers can force the function to
   not log command output to the debug log. Normally, you would want to log.
   However, if you're using this function to write huge output files (i.e.
   database backups written to C{stdout}) then you might want to avoid putting
   all that information into the debug log.

   The C{outputFile} parameter exists to make it easier for a caller to push
   output into a file, i.e. as a substitute for redirection to a file. If this
   value is passed in, each time a line of output is generated, it will be
   written to the file using C{outputFile.write()}. At the end, the file
   descriptor will be flushed using C{outputFile.flush()}. The caller
   maintains responsibility for closing the file object appropriately.

   @note: I know that it's a bit confusing that the command and the arguments
   are both lists. I could have just required the caller to pass in one big
   list. However, I think it makes some sense to keep the command (the
   constant part of what we're executing, i.e. C{"scp -B"}) separate from its
   arguments, even if they both end up looking kind of similar.

   @note: You cannot redirect output via shell constructs (i.e. C{>file},
   C{2>/dev/null}, etc.) using this function. The redirection string would be
   passed to the command just like any other argument. However, you can
   implement the equivalent to redirection using C{ignoreStderr} and
   C{outputFile}, as discussed above.

   @note: The operating system environment is partially sanitized before
   the command is invoked. See L{sanitizeEnvironment} for details.

   @param command: Shell command to execute
   @type command: List of individual arguments that make up the command

   @param args: List of arguments to the command
   @type args: List of additional arguments to the command

   @param returnOutput: Indicates whether to return the output of the command
   @type returnOutput: Boolean C{True} or C{False}

   @param ignoreStderr: Whether stderr should be discarded
   @type ignoreStderr: Boolean C{True} or C{False}

   @param doNotLog: Indicates that output should not be logged.
   @type doNotLog: Boolean C{True} or C{False}

   @param outputFile: File object that all output should be written to.
   @type outputFile: File object as returned from C{open()} or C{file()}.

   @return: Tuple of C{(result, output)} as described above.
   """
   logger.debug("Executing command %s with args %s." % (command, args))
   outputLogger.info("Executing command %s with args %s." % (command, args))
   if doNotLog:
      logger.debug("Note: output will not be logged, per the doNotLog flag.")
      outputLogger.info("Note: output will not be logged, per the doNotLog flag.")
   output = []
   fields = command[:] # make sure to copy it so we don't destroy it
   fields.extend(args)
   try:
      sanitizeEnvironment() # make sure we have a consistent environment
      try:
         pipe = Pipe(fields, ignoreStderr=ignoreStderr)
      except OSError:
         # On some platforms (i.e. Cygwin) this intermittently fails the first time we do it.
         # So, we attempt it a second time and if that works, we just go on as usual.
         # The problem appears to be that we sometimes get a bad stderr file descriptor.
         pipe = Pipe(fields, ignoreStderr=ignoreStderr)
      while True:
         line = pipe.stdout.readline()
         if not line: break
         if returnOutput: output.append(line)
         if outputFile is not None: outputFile.write(line)
         if not doNotLog: outputLogger.info(line[:-1]) # this way the log will (hopefully) get updated in realtime
      if outputFile is not None:
         try: # note, not every file-like object can be flushed
            outputFile.flush()
         except: pass
      if returnOutput:
         return (pipe.wait(), output)
      else:
         return (pipe.wait(), None)
   except OSError, e:
      try:
         if returnOutput:
            if output != []:
               return (pipe.wait(), output)
            else:
               return (pipe.wait(), [ e, ])
         else:
            return (pipe.wait(), None)
      except UnboundLocalError: # pipe not set
         if returnOutput:
            return (256, [])
         else:
            return (256, None)
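The core pattern described above -- pass an argument list rather than a shell string, stream output line by line, then collect the exit status -- can be sketched directly with `subprocess` in modern Python. This is an illustrative simplification, not the package's actual `Pipe`-based implementation; the `execute_command` name and reduced parameter list are assumptions for the sketch:

```python
import subprocess
import sys

def execute_command(command, args, return_output=False):
    """Run command+args as an argument list (never a shell string)."""
    fields = command + args  # keep the constant command and its args separate until now
    pipe = subprocess.Popen(fields, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,  # intermingle stderr, as the default above
                            universal_newlines=True)
    output = []
    for line in pipe.stdout:  # stream line by line, like the readline() loop above
        if return_output:
            output.append(line)
    status = pipe.wait()
    return (status, output if return_output else None)

# Use the current interpreter so the example is portable.
status, lines = execute_command([sys.executable], ["-c", "print('hello')"], return_output=True)
print(status, lines)  # 0 ['hello\n']
```

Because the arguments are a list, a malicious value like `"; rm -fR ~/; echo"` is passed to the child as a literal string, never interpreted by a shell.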


##############################
# calculateFileAge() function
##############################

def calculateFileAge(path):
   """
   Calculates the age (in days) of a file.

   The "age" of a file is the amount of time since the file was last used, per
   the most recent of the file's C{st_atime} and C{st_mtime} values.

   Technically, we only intend this function to work with files, but it will
   probably work with anything on the filesystem.

   @param path: Path to a file on disk.

   @return: Age of the file in days (possibly fractional).
   @raise OSError: If the file doesn't exist.
   """
   currentTime = int(time.time())
   fileStats = os.stat(path)
   lastUse = max(fileStats.st_atime, fileStats.st_mtime) # "most recent" is "largest"
   ageInSeconds = currentTime - lastUse
   ageInDays = ageInSeconds / SECONDS_PER_DAY
   return ageInDays


###################
# mount() function
###################

def mount(devicePath, mountPoint, fsType):
   """
   Mounts the indicated device at the indicated mount point.

   For instance, to mount a CD, you might use device path C{/dev/cdrw}, mount
   point C{/media/cdrw} and filesystem type C{iso9660}. You can safely use
   any filesystem type that is supported by C{mount} on your platform. If the
   type is C{None}, we'll attempt to let C{mount} auto-detect it. This may or
   may not work on all systems.

   @note: This only works on platforms that have a concept of "mounting" a
   filesystem through a command-line C{"mount"} command, like UNIXes. It
   won't work on Windows.

   @param devicePath: Path of device to be mounted.
   @param mountPoint: Path that device should be mounted at.
   @param fsType: Type of the filesystem assumed to be available via the device.

   @raise IOError: If the device cannot be mounted.
   """
   if fsType is None:
      args = [ devicePath, mountPoint ]
   else:
      args = [ "-t", fsType, devicePath, mountPoint ]
   command = resolveCommand(MOUNT_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True)[0]
   if result != 0:
      raise IOError("Error [%d] mounting [%s] at [%s] as [%s]." % (result, devicePath, mountPoint, fsType))


#####################
# unmount() function
#####################

def unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0):
   """
   Unmounts whatever device is mounted at the indicated mount point.

   Sometimes, it might not be possible to unmount the mount point immediately,
   if there are still files open there. Use the C{attempts} and C{waitSeconds}
   arguments to indicate how many unmount attempts to make and how many
   seconds to wait between attempts. If you pass in zero attempts, no attempts
   will be made (duh).

   If the indicated mount point is not really a mount point per
   C{os.path.ismount()}, then it will be ignored. This seems to be a safer
   check than looking through C{/etc/mtab}, since C{ismount()} is already in
   the Python standard library and is documented as working on all POSIX
   systems.

   If C{removeAfter} is C{True}, then the mount point will be removed using
   C{os.rmdir()} after the unmount action succeeds. If for some reason the
   mount point is not a directory, then it will not be removed.

   @note: This only works on platforms that have a concept of "mounting" a
   filesystem through a command-line C{"mount"} command, like UNIXes. It
   won't work on Windows.

   @param mountPoint: Mount point to be unmounted.
   @param removeAfter: Remove the mount point after unmounting it.
   @param attempts: Number of times to attempt the unmount.
   @param waitSeconds: Number of seconds to wait between repeated attempts.

   @raise IOError: If the mount point is still mounted after attempts are exhausted.
   """
   if os.path.ismount(mountPoint):
      for attempt in range(0, attempts):
         logger.debug("Making attempt %d to unmount [%s]." % (attempt, mountPoint))
         command = resolveCommand(UMOUNT_COMMAND)
         result = executeCommand(command, [ mountPoint, ], returnOutput=False, ignoreStderr=True)[0]
         if result != 0:
            logger.error("Error [%d] unmounting [%s] on attempt %d." % (result, mountPoint, attempt))
         elif os.path.ismount(mountPoint):
            logger.error("After attempt %d, [%s] is still mounted." % (attempt, mountPoint))
         else:
            logger.debug("Successfully unmounted [%s] on attempt %d." % (mountPoint, attempt))
            break # this will cause us to skip the loop else: clause
         if attempt+1 < attempts: # i.e. this isn't the last attempt
            if waitSeconds > 0:
               logger.info("Sleeping %d second(s) before next unmount attempt." % waitSeconds)
               time.sleep(waitSeconds)
      else:
         if os.path.ismount(mountPoint):
            raise IOError("Unable to unmount [%s] after %d attempts." % (mountPoint, attempts))
         logger.info("Mount point [%s] seems to have finally gone away." % mountPoint)
      if os.path.isdir(mountPoint) and removeAfter:
         logger.debug("Removing mount point [%s]." % mountPoint)
         os.rmdir(mountPoint)


###########################
# deviceMounted() function
###########################

def deviceMounted(devicePath):
   """
   Indicates whether a specific filesystem device is currently mounted.

   We determine whether the device is mounted by looking through the system's
   C{mtab} file. This file shows every currently-mounted filesystem, ordered
   by device. We only do the check if the C{mtab} file exists and is
   readable. Otherwise, we assume that the device is not mounted.

   @note: This only works on platforms that have a concept of an mtab file
   to show mounted volumes, like UNIXes. It won't work on Windows.

   @param devicePath: Path of device to be checked

   @return: True if device is mounted, false otherwise.
   """
   if os.path.exists(MTAB_FILE) and os.access(MTAB_FILE, os.R_OK):
      realPath = os.path.realpath(devicePath)
      lines = open(MTAB_FILE).readlines()
      for line in lines:
         (mountDevice, mountPoint, remainder) = line.split(None, 2)
         if mountDevice in [ devicePath, realPath, ]:
            logger.debug("Device [%s] is mounted at [%s]." % (devicePath, mountPoint))
            return True
   return False


########################
# encodePath() function
########################

def encodePath(path):
   r"""
   Safely encodes a filesystem path.

   Many Python filesystem functions, such as C{os.listdir}, behave differently
   if they are passed unicode arguments versus simple string arguments. For
   instance, C{os.listdir} generally returns unicode path names if it is
   passed a unicode argument, and string pathnames if it is passed a string
   argument.

   However, this behavior often isn't as consistent as we might like. As an
   example, C{os.listdir} "gives up" if it finds a filename that it can't
   properly encode given the current locale settings. This means that the
   returned list is a mixed set of unicode and simple string paths. This has
   consequences later, because other filesystem functions like C{os.path.join}
   will blow up if they are given one string path and one unicode path.

   On comp.lang.python, Martin v. Löwis explained the C{os.listdir} behavior
   like this::

      The operating system (POSIX) does not have the inherent notion that file
      names are character strings. Instead, in POSIX, file names are primarily
      byte strings. There are some bytes which are interpreted as characters
      (e.g. '\x2e', which is '.', or '\x2f', which is '/'), but apart from
      that, most OS layers think these are just bytes.

      Now, most *people* think that file names are character strings. To
      interpret a file name as a character string, you need to know what the
      encoding is to interpret the file names (which are byte strings) as
      character strings.

      There is, unfortunately, no operating system API to carry the notion of
      a file system encoding. By convention, the locale settings should be
      used to establish this encoding, in particular the LC_CTYPE facet of the
      locale. This is defined in the environment variables LC_CTYPE, LC_ALL,
      and LANG (searched in this order).

      If LANG is not set, the "C" locale is assumed, which uses ASCII as its
      file system encoding. In this locale, '\xe2\x99\xaa\xe2\x99\xac' is not
      a valid file name (at least it cannot be interpreted as characters, and
      hence not be converted to Unicode).

      Now, your Python script has requested that all file names *should* be
      returned as character (ie. Unicode) strings, but Python cannot comply,
      since there is no way to find out what this byte string means, in terms
      of characters.

      So we have three options:

      1. Skip this string, only return the ones that can be converted to
         Unicode. Give the user the impression the file does not exist.
      2. Return the string as a byte string
      3. Refuse to listdir altogether, raising an exception (i.e. return
         nothing)

      Python has chosen alternative 2, allowing the application to implement 1
      or 3 on top of that if it wants to (or come up with other strategies,
      such as user feedback).

   As a solution, he suggests that rather than passing unicode paths into the
   filesystem functions, that I should sensibly encode the path first. That is
   what this function accomplishes. Any function which takes a filesystem path
   as an argument should encode it first, before using it for any other
   purpose.

   I confess I still don't completely understand how this works. On a system
   with filesystem encoding "ISO-8859-1", a path C{u"\xe2\x99\xaa\xe2\x99\xac"}
   is converted into the string C{"\xe2\x99\xaa\xe2\x99\xac"}. However, on a
   system with a "utf-8" encoding, the result is a completely different
   string: C{"\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac"}. A quick
   test where I write to the first filename and open the second proves that
   the two strings represent the same file on disk, which is all I really
   care about.

   @note: As a special case, if C{path} is C{None}, then this function will
   return C{None}.

   @note: To provide several examples of encoding values, my Debian sarge box
   with an ext3 filesystem has Python filesystem encoding C{ISO-8859-1}. User
   Anarcat's Debian box with an xfs filesystem has filesystem encoding
   C{ANSI_X3.4-1968}. Both my iBook G4 running Mac OS X 10.4 and user Dag
   Rende's SuSE 9.3 box have filesystem encoding C{UTF-8}.

   @note: Just because a filesystem has C{UTF-8} encoding doesn't mean that it
   will be able to handle all extended-character filenames. For instance,
   certain extended-character (but not UTF-8) filenames -- like the ones in
   the regression test tar file C{test/data/tree13.tar.gz} -- are not valid
   under Mac OS X, and it's not even possible to extract them from the tarfile
   on that platform.

   @param path: Path to encode

   @return: Path, as a string, encoded appropriately
   @raise ValueError: If the path cannot be encoded properly.
   """
   if path is None:
      return path
   try:
      if isinstance(path, unicode):
         encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
         path = path.encode(encoding)
      return path
   except UnicodeError:
      raise ValueError("Path could not be safely encoded as %s." % encoding)
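For context, modern Python addresses the same byte-string-versus-character-string problem this function works around: `os.fsencode()` and `os.fsdecode()` apply the filesystem encoding with the `surrogateescape` error handler, so even names that are not valid in the current locale round-trip losslessly. A small illustrative sketch (not part of this Python 2 module):

```python
import os

# os.fsencode applies sys.getfilesystemencoding() with "surrogateescape",
# so undecodable bytes survive a decode/encode round trip unchanged.
raw = os.fsencode("cback.conf")
decoded = os.fsdecode(raw)
print(raw)      # b'cback.conf'
print(decoded)  # cback.conf
```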


########################
# nullDevice() function
########################

def nullDevice():
   """
   Attempts to portably return the null device on this system.

   The null device is something like C{/dev/null} on a UNIX system. The name
   varies on other platforms.
   """
   return os.devnull


#############################
# deriveDayOfWeek() function
#############################

def deriveDayOfWeek(dayName):
   """
   Converts an English day name to a numeric day of week as from C{time.localtime}.

   For instance, the day C{monday} would be converted to the number C{0}.

   @param dayName: Day of week to convert
   @type dayName: string, i.e. C{"monday"}, C{"tuesday"}, etc.

   @returns: Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible.
   """
   if dayName.lower() == "monday":
      return 0
   elif dayName.lower() == "tuesday":
      return 1
   elif dayName.lower() == "wednesday":
      return 2
   elif dayName.lower() == "thursday":
      return 3
   elif dayName.lower() == "friday":
      return 4
   elif dayName.lower() == "saturday":
      return 5
   elif dayName.lower() == "sunday":
      return 6
   else:
      return -1 # What else can we do?? Throw an exception, I guess.
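The same name-to-C{tm_wday} mapping can be expressed as a list whose index order matches Monday=0 through Sunday=6, replacing the if/elif ladder. An illustrative sketch (the `derive_day_of_week` name is not part of the package):

```python
# Index in this list equals the tm_wday value (Monday is 0, Sunday is 6).
DAY_NAMES = ["monday", "tuesday", "wednesday", "thursday",
             "friday", "saturday", "sunday"]

def derive_day_of_week(day_name):
    """Map an English day name to tm_wday, or -1 if no conversion is possible."""
    try:
        return DAY_NAMES.index(day_name.lower())
    except ValueError:
        return -1

print(derive_day_of_week("Monday"))   # 0
print(derive_day_of_week("sunday"))   # 6
print(derive_day_of_week("someday"))  # -1
```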


###########################
# isStartOfWeek() function
###########################

def isStartOfWeek(startingDay):
   """
   Indicates whether "today" is the backup starting day per configuration.

   If the current day's English name matches the indicated starting day, then
   today is a starting day.

   @param startingDay: Configured starting day.
   @type startingDay: string, i.e. C{"monday"}, C{"tuesday"}, etc.

   @return: Boolean indicating whether today is the starting day.
   """
   value = time.localtime().tm_wday == deriveDayOfWeek(startingDay)
   if value:
      logger.debug("Today is the start of the week.")
   else:
      logger.debug("Today is NOT the start of the week.")
   return value


#################################
# buildNormalizedPath() function
#################################

def buildNormalizedPath(path):
   """
   Returns a "normalized" path based on a path name.

   A normalized path is a representation of a path that is also a valid file
   name. To make a valid file name out of a complete path, we have to convert
   or remove some characters that are significant to the filesystem -- in
   particular, the path separator and any leading C{'.'} character (which
   would cause the file to be hidden in a file listing).

   Note that this is a one-way transformation -- you can't safely derive the
   original path from the normalized path.

   To normalize a path, we begin by looking at the first character. If the
   first character is C{'/'} or C{'\\'}, it gets removed. If the first
   character is C{'.'}, it gets converted to C{'_'}. Then, we look through
   the rest of the path and convert all remaining C{'/'} or C{'\\'} characters
   to C{'-'}, and all remaining whitespace characters to C{'_'}.

   As a special case, a path consisting only of a single C{'/'} or C{'\\'}
   character will be converted to C{'-'}.

   @param path: Path to normalize

   @return: Normalized path as described above.

   @raise ValueError: If the path is None
   """
   if path is None:
      raise ValueError("Cannot normalize path None.")
   elif len(path) == 0:
      return path
   elif path == "/" or path == "\\":
      return "-"
   else:
      normalized = path
      normalized = re.sub(r"^\/", "", normalized)  # remove leading '/'
      normalized = re.sub(r"^\\", "", normalized)  # remove leading '\'
      normalized = re.sub(r"^\.", "_", normalized) # convert leading '.' to '_' so file won't be hidden
      normalized = re.sub(r"\/", "-", normalized)  # convert all '/' characters to '-'
      normalized = re.sub(r"\\", "-", normalized)  # convert all '\' characters to '-'
      normalized = re.sub(r"\s", "_", normalized)  # convert all whitespace to '_'
      return normalized
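The substitution rules above can be sketched compactly by folding the forward- and backslash cases into character classes. An illustrative modern-Python version (the `build_normalized_path` name is not part of the package):

```python
import re

def build_normalized_path(path):
    """Turn a filesystem path into a string that is safe to use as a file name."""
    if path is None:
        raise ValueError("Cannot normalize path None.")
    if path in ("/", "\\"):
        return "-"
    normalized = re.sub(r"^[/\\]", "", path)      # drop one leading separator
    normalized = re.sub(r"^\.", "_", normalized)  # a leading '.' would hide the file
    normalized = re.sub(r"[/\\]", "-", normalized)
    normalized = re.sub(r"\s", "_", normalized)
    return normalized

print(build_normalized_path("/etc/cback.conf"))     # etc-cback.conf
print(build_normalized_path(".profile"))            # _profile
print(build_normalized_path("/home/my user/data"))  # home-my_user-data
```

As the docstring notes, this is one-way: both `/` and whitespace collapse into `-` and `_`, so the original path cannot be recovered.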


#################################
# sanitizeEnvironment() function
#################################

def sanitizeEnvironment():
   """
   Sanitizes the operating system environment.

   The operating system environment is contained in C{os.environ}. This
   method sanitizes the contents of that dictionary.

   Currently, all it does is reset the locale (removing C{$LC_*}) and set the
   default language (C{$LANG}) to L{DEFAULT_LANGUAGE}. This way, we can count
   on consistent localization regardless of what the end-user has configured.
   This is important for code that needs to parse program output.

   The C{os.environ} dictionary is modified in-place. If C{$LANG} is already
   set to the proper value, it is not re-set, so we can avoid the memory leaks
   that are documented to occur on BSD-based systems.

   @return: Copy of the sanitized environment.
   """
   for var in LOCALE_VARS:
      if os.environ.has_key(var):
         del os.environ[var]
   if os.environ.has_key(LANG_VAR):
      if os.environ[LANG_VAR] != DEFAULT_LANGUAGE: # no need to reset $LANG if it's already correct (avoid leaks on BSD systems)
         os.environ[LANG_VAR] = DEFAULT_LANGUAGE
   return os.environ.copy()


#########################
# checkUnique() function
#########################

def checkUnique(prefix, values):
   """
   Checks that all values are unique.

   The values list is checked for duplicate values. If there are duplicates,
   an exception is thrown. All duplicate values are listed in the exception.

   @note: The passed-in list is sorted in place as a side effect.

   @param prefix: Prefix to use in the thrown exception
   @param values: List of values to check

   @raise ValueError: If there are duplicates in the list
   """
   values.sort()
   duplicates = []
   for i in range(1, len(values)):
      if values[i-1] == values[i]:
         duplicates.append(values[i])
   if duplicates:
      raise ValueError("%s %s" % (prefix, duplicates))
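The sort-then-compare-neighbors approach works, but it mutates the caller's list. A sketch of an alternative using `collections.Counter` that leaves the input untouched (the `check_unique` name is illustrative, not part of the package):

```python
from collections import Counter

def check_unique(prefix, values):
    """Raise ValueError listing any duplicate entries, without mutating values."""
    duplicates = sorted(v for v, count in Counter(values).items() if count > 1)
    if duplicates:
        raise ValueError("%s %s" % (prefix, duplicates))

check_unique("Duplicate values:", ["a", "b", "c"])  # no exception
try:
    check_unique("Duplicate values:", ["a", "b", "a", "c", "c"])
except ValueError as e:
    print(e)  # Duplicate values: ['a', 'c']
```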
1968
1969     #######################################
1970     # parseCommaSeparatedString() function
1971     #######################################
1972
1973     def parseCommaSeparatedString(commaString):
1974     """
1975     Parses a list of values out of a comma-separated string.
1976
1977     The items in the list are split by comma, and then have whitespace
1978     stripped.  As a special case, if C{commaString} is C{None}, then C{None}
1979     will be returned.
1980
1981     @param commaString: List of values in comma-separated string format.
1982     @return: Values from commaString split into a list, or C{None}.
1983     """
1984     if commaString is None:
1985        return None
1986     else:
1987        pass1 = commaString.split(",")
1988        pass2 = []
1989        for item in pass1:
1990           item = item.strip()
1991           if len(item) > 0:
1992              pass2.append(item)
1993        return pass2
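The two-pass split-then-strip logic collapses to a single comprehension; this sketch preserves the C{None} special case and the dropping of empty items:

```python
def parse_comma_separated_string(comma_string):
    """Sketch of parseCommaSeparatedString(): split on commas, strip, drop empties."""
    if comma_string is None:
        return None
    return [item.strip() for item in comma_string.split(",") if item.strip()]
```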
    1994

CedarBackup2-2.22.0/doc/interface/CedarBackup2.release-pysrc.html: CedarBackup2.release
    Package CedarBackup2 :: Module release

    Source Code for Module CedarBackup2.release

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Cedar Backup, release 2 
    14  # Revision : $Id: release.py 1044 2013-05-10 02:16:12Z pronovic $ 
    15  # Purpose  : Provides location to maintain release information. 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  """ 
    20  Provides location to maintain version information. 
    21   
    22  @sort: AUTHOR, EMAIL, COPYRIGHT, VERSION, DATE, URL 
    23   
    24  @var AUTHOR: Author of software. 
    25  @var EMAIL: Email address of author. 
    26  @var COPYRIGHT: Copyright date. 
    27  @var VERSION: Software version. 
    28  @var DATE: Software release date. 
    29  @var URL: URL of Cedar Backup webpage. 
    30   
    31  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    32  """ 
    33   
    34  AUTHOR      = "Kenneth J. Pronovici" 
    35  EMAIL       = "pronovic@ieee.org" 
    36  COPYRIGHT   = "2004-2011,2013" 
    37  VERSION     = "2.22.0" 
    38  DATE        = "09 May 2013" 
    39  URL         = "http://cedar-backup.sourceforge.net/" 
    40   
    

CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.actions.constants-module.html: constants

    Module constants


    Variables

    COLLECT_INDICATOR
    DIGEST_EXTENSION
    DIR_TIME_FORMAT
    INDICATOR_PATTERN
    STAGE_INDICATOR
    STORE_INDICATOR
    __package__

CedarBackup2-2.22.0/doc/interface/frames.html: CedarBackup2
CedarBackup2-2.22.0/doc/interface/redirect.html: Epydoc Redirect Page

    Epydoc Auto-redirect page

    When javascript is enabled, this page will redirect URLs of the form redirect.html#dotted.name to the documentation for the object with the given fully-qualified dotted name.

     

CedarBackup2-2.22.0/doc/interface/index.html: CedarBackup2
CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.split.SplitConfig-class.html: CedarBackup2.extend.split.SplitConfig
    Package CedarBackup2 :: Package extend :: Module split :: Class SplitConfig

    Class SplitConfig

    source code

    object --+
             |
            SplitConfig
    

    Class representing split configuration.

    Split configuration is used for splitting staging directories.

    The following restrictions exist on data in this class:

    • The size limit must be a ByteQuantity
    • The split size must be a ByteQuantity
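The restrictions above are enforced by property setters, following the pattern visible in the method list below.  A minimal, self-contained sketch of that pattern (using a stand-in for the real `ByteQuantity` class, which lives in `CedarBackup2.config`):

```python
class ByteQuantity(object):
    """Stand-in for CedarBackup2.config.ByteQuantity (illustrative only)."""
    def __init__(self, quantity, units):
        self.quantity, self.units = quantity, units

class SplitConfig(object):
    """Sketch of the setter-based validation pattern used by the real class."""
    def __init__(self, sizeLimit=None, splitSize=None):
        self._sizeLimit = None
        self._splitSize = None
        self.sizeLimit = sizeLimit        # assignments route through the property setters
        self.splitSize = splitSize
    def _setSizeLimit(self, value):
        if value is not None and not isinstance(value, ByteQuantity):
            raise ValueError("Size limit must be a ByteQuantity")
        self._sizeLimit = value
    def _getSizeLimit(self):
        return self._sizeLimit
    def _setSplitSize(self, value):
        if value is not None and not isinstance(value, ByteQuantity):
            raise ValueError("Split size must be a ByteQuantity")
        self._splitSize = value
    def _getSplitSize(self):
        return self._splitSize
    sizeLimit = property(_getSizeLimit, _setSizeLimit, None, "Size limit, as a ByteQuantity")
    splitSize = property(_getSplitSize, _setSplitSize, None, "Split size, as a ByteQuantity")

config = SplitConfig(sizeLimit=ByteQuantity("2.0", "GB"))
```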
Instance Methods
     
    __init__(self, sizeLimit=None, splitSize=None)
Constructor for the SplitConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setSizeLimit(self, value)
    Property target used to set the size limit.
    source code
     
    _getSizeLimit(self)
    Property target used to get the size limit.
    source code
     
    _setSplitSize(self, value)
    Property target used to set the split size.
    source code
     
    _getSplitSize(self)
    Property target used to get the split size.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      sizeLimit
    Size limit, as a ByteQuantity
      splitSize
    Split size, as a ByteQuantity

    Inherited from object: __class__

Method Details

    __init__(self, sizeLimit=None, splitSize=None)
    (Constructor)

    source code 

Constructor for the SplitConfig class.

    Parameters:
    • sizeLimit - Size limit of the files, in bytes
    • splitSize - Size that files exceeding the limit will be split into, in bytes
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setSizeLimit(self, value)

    source code 

    Property target used to set the size limit. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

    _setSplitSize(self, value)

    source code 

    Property target used to set the split size. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

Property Details

    sizeLimit

    Size limit, as a ByteQuantity

    Get Method:
    _getSizeLimit(self) - Property target used to get the size limit.
    Set Method:
    _setSizeLimit(self, value) - Property target used to set the size limit.

    splitSize

    Split size, as a ByteQuantity

    Get Method:
    _getSplitSize(self) - Property target used to get the split size.
    Set Method:
    _setSplitSize(self, value) - Property target used to set the split size.

CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mysql.MysqlConfig-class.html: CedarBackup2.extend.mysql.MysqlConfig
    Package CedarBackup2 :: Package extend :: Module mysql :: Class MysqlConfig

    Class MysqlConfig

    source code

    object --+
             |
            MysqlConfig
    

    Class representing MySQL configuration.

    The MySQL configuration information is used for backing up MySQL databases.

    The following restrictions exist on data in this class:

    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The 'all' flag must be 'Y' if no databases are defined.
    • The 'all' flag must be 'N' if any databases are defined.
    • Any values in the databases list must be strings.
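The interaction between the 'all' flag and the databases list can be sketched as a standalone check.  This is not the shipped `validate()` code, and the `VALID_COMPRESS_MODES` value set below is an assumption for illustration:

```python
def validate_mysql_restrictions(compress_mode, all_flag, databases):
    """Sketch of the data restrictions listed above (illustrative, not shipped code)."""
    VALID_COMPRESS_MODES = ["none", "gzip", "bzip2"]     # assumed value set
    if compress_mode not in VALID_COMPRESS_MODES:
        raise ValueError("Compress mode must be one of %s" % VALID_COMPRESS_MODES)
    if all_flag and databases:
        raise ValueError("The 'all' flag must be 'N' if any databases are defined")
    if not all_flag and not databases:
        raise ValueError("The 'all' flag must be 'Y' if no databases are defined")
    if databases and not all(isinstance(db, str) for db in databases):
        raise ValueError("Any values in the databases list must be strings")
```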
Instance Methods
     
    __init__(self, user=None, password=None, compressMode=None, all=None, databases=None)
    Constructor for the MysqlConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setUser(self, value)
    Property target used to set the user value.
    source code
     
    _getUser(self)
    Property target used to get the user value.
    source code
     
    _setPassword(self, value)
    Property target used to set the password value.
    source code
     
    _getPassword(self)
    Property target used to get the password value.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setAll(self, value)
    Property target used to set the 'all' flag.
    source code
     
    _getAll(self)
    Property target used to get the 'all' flag.
    source code
     
    _setDatabases(self, value)
    Property target used to set the databases list.
    source code
     
    _getDatabases(self)
    Property target used to get the databases list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      user
    User to execute backup as.
      password
    Password associated with user.
      all
    Indicates whether to back up all databases.
      databases
    List of databases to back up.
      compressMode
    Compress mode to be used for backed-up files.

    Inherited from object: __class__

Method Details

    __init__(self, user=None, password=None, compressMode=None, all=None, databases=None)
    (Constructor)

    source code 

    Constructor for the MysqlConfig class.

    Parameters:
    • user - User to execute backup as.
    • password - Password associated with user.
    • compressMode - Compress mode for backed-up files.
    • all - Indicates whether to back up all databases.
    • databases - List of databases to back up.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setAll(self, value)

    source code 

    Property target used to set the 'all' flag. No validations, but we normalize the value to True or False.

    _setDatabases(self, value)

    source code 

    Property target used to set the databases list. Either the value must be None or each element must be a string.

    Raises:
    • ValueError - If the value is not a string.

Property Details

    user

    User to execute backup as.

    Get Method:
    _getUser(self) - Property target used to get the user value.
    Set Method:
    _setUser(self, value) - Property target used to set the user value.

    password

    Password associated with user.

    Get Method:
    _getPassword(self) - Property target used to get the password value.
    Set Method:
    _setPassword(self, value) - Property target used to set the password value.

    all

    Indicates whether to back up all databases.

    Get Method:
    _getAll(self) - Property target used to get the 'all' flag.
    Set Method:
    _setAll(self, value) - Property target used to set the 'all' flag.

    databases

    List of databases to back up.

    Get Method:
    _getDatabases(self) - Property target used to get the databases list.
    Set Method:
    _setDatabases(self, value) - Property target used to set the databases list.

    compressMode

    Compress mode to be used for backed-up files.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.postgresql.LocalConfig-class.html: CedarBackup2.extend.postgresql.LocalConfig
    Package CedarBackup2 :: Package extend :: Module postgresql :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit PostgreSQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <postgresql> configuration section as the next child of a parent.
    source code
     
    _setPostgresql(self, value)
    Property target used to set the postgresql configuration value.
    source code
     
    _getPostgresql(self)
    Property target used to get the postgresql configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parsePostgresql(parent)
    Parses a postgresql configuration section.
    source code
Properties
      postgresql
    Postgresql configuration in terms of a PostgresqlConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
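The mutually-exclusive handling of `xmlData` and `xmlPath` described above can be sketched as a small helper.  `resolve_xml_source` is a hypothetical name, not part of the library; it only illustrates the contract (at most one source, a path is read once and the original location is then discarded):

```python
def resolve_xml_source(xmlData=None, xmlPath=None):
    """Hypothetical helper mirroring the constructor contract described above."""
    if xmlData is not None and xmlPath is not None:
        raise ValueError("Use either xmlData or xmlPath, but not both.")
    if xmlPath is not None:
        with open(xmlPath) as handle:
            return handle.read()           # path itself is not retained afterwards
    return xmlData                         # may be None: configuration starts empty and invalid
```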

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    The compress mode must be filled in. Then, if the 'all' flag is set, no databases are allowed, and if the 'all' flag is not set, at least one database is required.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <postgresql> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      user           //cb_config/postgresql/user
      compressMode   //cb_config/postgresql/compress_mode
      all            //cb_config/postgresql/all
    

    We also add groups of the following items, one list element per item:

      database       //cb_config/postgresql/database
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setPostgresql(self, value)

    source code 

    Property target used to set the postgresql configuration value. If not None, the value must be a PostgresqlConfig object.

    Raises:
    • ValueError - If the value is not a PostgresqlConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the postgresql configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parsePostgresql(parent)
    Static Method

    source code 

    Parses a postgresql configuration section.

    We read the following fields:

      user           //cb_config/postgresql/user
      compressMode   //cb_config/postgresql/compress_mode
      all            //cb_config/postgresql/all
    

    We also read groups of the following item, one list element per item:

      databases      //cb_config/postgresql/database
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    PostgresqlConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.
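Reading those same fields back with `xml.dom.minidom` can be sketched as follows (illustrative only; the shipped parser uses the project's own xmlutil helpers and returns a `PostgresqlConfig` object):

```python
from xml.dom.minidom import parseString

# Sketch: read the fields listed above out of a <postgresql> section
XML = ("<cb_config><postgresql><user>backup</user>"
       "<compress_mode>gzip</compress_mode><all>N</all>"
       "<database>db1</database><database>db2</database>"
       "</postgresql></cb_config>")
section = parseString(XML).getElementsByTagName("postgresql")[0]

def read_field(tag):
    nodes = section.getElementsByTagName(tag)
    return nodes[0].firstChild.data if nodes else None   # None when the field is absent

user = read_field("user")
databases = [node.firstChild.data for node in section.getElementsByTagName("database")]
```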

Property Details

    postgresql

    Postgresql configuration in terms of a PostgresqlConfig object.

    Get Method:
    _getPostgresql(self) - Property target used to get the postgresql configuration value.
    Set Method:
    _setPostgresql(self, value) - Property target used to set the postgresql configuration value.

CedarBackup2-2.22.0/doc/interface/CedarBackup2.peer-pysrc.html: CedarBackup2.peer
    Package CedarBackup2 :: Module peer

    Source Code for Module CedarBackup2.peer

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python (>= 2.5) 
      29  # Project  : Cedar Backup, release 2 
      30  # Revision : $Id: peer.py 1006 2010-07-07 21:03:57Z pronovic $ 
      31  # Purpose  : Provides backup peer-related objects. 
      32  # 
      33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      34   
      35  ######################################################################## 
      36  # Module documentation 
      37  ######################################################################## 
      38   
      39  """ 
      40  Provides backup peer-related objects and utility functions. 
      41   
      42  @sort: LocalPeer, RemotePeer 
      43   
      44  @var DEF_COLLECT_INDICATOR: Name of the default collect indicator file. 
      45  @var DEF_STAGE_INDICATOR: Name of the default stage indicator file. 
      46   
      47  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      48  """ 
      49   
      50   
      51  ######################################################################## 
      52  # Imported modules 
      53  ######################################################################## 
      54   
      55  # System modules 
      56  import os 
      57  import logging 
      58  import shutil 
      59   
      60  # Cedar Backup modules 
      61  from CedarBackup2.filesystem import FilesystemList 
      62  from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot 
      63  from CedarBackup2.util import splitCommandLine, encodePath 
      64  from CedarBackup2.config import VALID_FAILURE_MODES 
      65   
      66   
      67  ######################################################################## 
      68  # Module-wide constants and variables 
      69  ######################################################################## 
      70   
      71  logger                  = logging.getLogger("CedarBackup2.log.peer") 
      72   
      73  DEF_RCP_COMMAND         = [ "/usr/bin/scp", "-B", "-q", "-C" ] 
      74  DEF_RSH_COMMAND         = [ "/usr/bin/ssh", ] 
      75  DEF_CBACK_COMMAND       = "/usr/bin/cback" 
      76   
      77  DEF_COLLECT_INDICATOR   = "cback.collect" 
      78  DEF_STAGE_INDICATOR     = "cback.stage" 
      79   
      80  SU_COMMAND              = [ "su" ] 
    
      81   
      82   
      83  ######################################################################## 
      84  # LocalPeer class definition 
      85  ######################################################################## 
      86   
      87  class LocalPeer(object): 
      88   
      89     ###################### 
      90     # Class documentation 
      91     ###################### 
      92   
      93     """ 
      94     Backup peer representing a local peer in a backup pool. 
      95   
      96     This is a class representing a local (non-network) peer in a backup pool. 
      97     Local peers are backed up by simple filesystem copy operations.  A local 
      98     peer has associated with it a name (typically, but not necessarily, a 
      99     hostname) and a collect directory. 
     100   
     101     The public methods other than the constructor are part of a "backup peer" 
     102     interface shared with the C{RemotePeer} class. 
     103   
     104     @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, 
     105            _copyLocalDir, _copyLocalFile, name, collectDir 
     106     """ 
     107   
     108     ############## 
     109     # Constructor 
     110     ############## 
     111   
     112     def __init__(self, name, collectDir, ignoreFailureMode=None): 
     113        """ 
     114        Initializes a local backup peer. 
     115   
     116        Note that the collect directory must be an absolute path, but does not 
     117        have to exist when the object is instantiated.  We do a lazy validation 
     118        on this value since we could (potentially) be creating peer objects 
     119        before an ongoing backup completed. 
     120   
     121        @param name: Name of the backup peer 
     122        @type name: String, typically a hostname 
     123   
     124        @param collectDir: Path to the peer's collect directory 
     125        @type collectDir: String representing an absolute local path on disk 
     126   
     127        @param ignoreFailureMode: Ignore failure mode for this peer 
     128        @type ignoreFailureMode: One of VALID_FAILURE_MODES 
     129   
     130        @raise ValueError: If the name is empty. 
     131        @raise ValueError: If collect directory is not an absolute path. 
     132        """ 
     133        self._name = None 
     134        self._collectDir = None 
     135        self._ignoreFailureMode = None 
     136        self.name = name 
     137        self.collectDir = collectDir 
     138        self.ignoreFailureMode = ignoreFailureMode 
     139   
     140   
     141     ############# 
     142     # Properties 
     143     ############# 
     144   
     145     def _setName(self, value): 
     146        """ 
     147        Property target used to set the peer name. 
     148        The value must be a non-empty string and cannot be C{None}. 
     149        @raise ValueError: If the value is an empty string or C{None}. 
     150        """ 
     151        if value is None or len(value) < 1: 
     152           raise ValueError("Peer name must be a non-empty string.") 
     153        self._name = value 
     154   
     155     def _getName(self): 
     156        """ 
     157        Property target used to get the peer name. 
     158        """ 
     159        return self._name 
     160   
     161     def _setCollectDir(self, value): 
     162        """ 
     163        Property target used to set the collect directory. 
     164        The value must be an absolute path and cannot be C{None}. 
     165        It does not have to exist on disk at the time of assignment. 
     166        @raise ValueError: If the value is C{None} or is not an absolute path. 
     167        @raise ValueError: If a path cannot be encoded properly. 
     168        """ 
     169        if value is None or not os.path.isabs(value): 
     170           raise ValueError("Collect directory must be an absolute path.") 
     171        self._collectDir = encodePath(value) 
     172   
     173     def _getCollectDir(self): 
     174        """ 
     175        Property target used to get the collect directory. 
     176        """ 
     177        return self._collectDir 
     178   
     179     def _setIgnoreFailureMode(self, value): 
     180        """ 
     181        Property target used to set the ignoreFailure mode. 
     182        If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. 
     183        @raise ValueError: If the value is not valid. 
     184        """ 
     185        if value is not None: 
     186           if value not in VALID_FAILURE_MODES: 
     187              raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) 
     188        self._ignoreFailureMode = value 
     189   
     190     def _getIgnoreFailureMode(self): 
     191        """ 
     192        Property target used to get the ignoreFailure mode. 
     193        """ 
     194        return self._ignoreFailureMode 
     195   
     196     name = property(_getName, _setName, None, "Name of the peer.") 
     197     collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).") 
     198     ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") 
     199   
     200   
     201     ################# 
     202     # Public methods 
     203     ################# 
     204   
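Aside: the getter/setter/`property()` pattern just defined can be exercised standalone.  This sketch reproduces only the name property, outside of the real class:

```python
# Self-contained sketch of the property() validation pattern used by LocalPeer
class PeerSketch(object):
    def __init__(self, name):
        self._name = None
        self.name = name                    # routed through _setName below
    def _setName(self, value):
        if value is None or len(value) < 1:
            raise ValueError("Peer name must be a non-empty string.")
        self._name = value
    def _getName(self):
        return self._name
    name = property(_getName, _setName, None, "Name of the peer.")

peer = PeerSketch("backup-host")
```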
     205     def stagePeer(self, targetDir, ownership=None, permissions=None): 
     206        """ 
     207        Stages data from the peer into the indicated local target directory. 
     208   
     209        The collect and target directories must both already exist before this 
     210        method is called.  If passed in, ownership and permissions will be 
     211        applied to the files that are copied. 
     212   
     213        @note: The caller is responsible for checking that the indicator exists, 
     214        if they care.  This function only stages the files within the directory. 
     215   
     216        @note: If you have user/group as strings, call the L{util.getUidGid} function 
     217        to get the associated uid/gid as an ownership tuple. 
     218   
     219        @param targetDir: Target directory to write data into 
     220        @type targetDir: String representing a directory on disk 
     221   
     222        @param ownership: Owner and group that the staged files should have 
     223        @type ownership: Tuple of numeric ids C{(uid, gid)} 
     224   
     225        @param permissions: Permissions that the staged files should have 
     226        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). 
     227   
     228        @return: Number of files copied from the source directory to the target directory. 
     229   
     230        @raise ValueError: If collect directory is not a directory or does not exist 
     231        @raise ValueError: If target directory is not a directory, does not exist or is not absolute. 
     232        @raise ValueError: If a path cannot be encoded properly. 
     233        @raise IOError: If there were no files to stage (i.e. the directory was empty) 
     234        @raise IOError: If there is an IO error copying a file. 
     235        @raise OSError: If there is an OS error copying or changing permissions on a file 
     236        """ 
     237        targetDir = encodePath(targetDir) 
     238        if not os.path.isabs(targetDir): 
     239           logger.debug("Target directory [%s] not an absolute path." % targetDir) 
     240           raise ValueError("Target directory must be an absolute path.") 
     241        if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): 
     242           logger.debug("Collect directory [%s] is not a directory or does not exist on disk." % self.collectDir) 
     243           raise ValueError("Collect directory is not a directory or does not exist on disk.") 
     244        if not os.path.exists(targetDir) or not os.path.isdir(targetDir): 
     245           logger.debug("Target directory [%s] is not a directory or does not exist on disk." % targetDir) 
     246           raise ValueError("Target directory is not a directory or does not exist on disk.") 
     247        count = LocalPeer._copyLocalDir(self.collectDir, targetDir, ownership, permissions) 
     248        if count == 0: 
     249           raise IOError("Did not copy any files from local peer.") 
     250        return count 
     251   
     252     def checkCollectIndicator(self, collectIndicator=None): 
     253        """ 
     254        Checks the collect indicator in the peer's staging directory. 
     255   
     256        When a peer has completed collecting its backup files, it will write an 
     257        empty indicator file into its collect directory.  This method checks to 
     258        see whether that indicator has been written.  We're "stupid" here - if 
     259        the collect directory doesn't exist, you'll naturally get back C{False}. 
     260   
     261        If you need to, you can override the name of the collect indicator file 
     262        by passing in a different name. 
     263   
     264        @param collectIndicator: Name of the collect indicator file to check 
     265        @type collectIndicator: String representing name of a file in the collect directory 
     266   
     267        @return: Boolean true/false depending on whether the indicator exists. 
     268        @raise ValueError: If a path cannot be encoded properly. 
     269        """ 
     270        collectIndicator = encodePath(collectIndicator) 
     271        if collectIndicator is None: 
     272           return os.path.exists(os.path.join(self.collectDir, DEF_COLLECT_INDICATOR)) 
     273        else: 
     274           return os.path.exists(os.path.join(self.collectDir, collectIndicator)) 
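Aside: the indicator check reduces to an `os.path.exists()` test, which this sketch demonstrates against a throwaway directory (a standalone function here, not the method itself):

```python
import os
import tempfile

DEF_COLLECT_INDICATOR = "cback.collect"

def check_collect_indicator(collect_dir, indicator=None):
    """Standalone sketch of the check: False if directory or indicator is absent."""
    name = DEF_COLLECT_INDICATOR if indicator is None else indicator
    return os.path.exists(os.path.join(collect_dir, name))

collect_dir = tempfile.mkdtemp()
before = check_collect_indicator(collect_dir)          # indicator not written yet
open(os.path.join(collect_dir, DEF_COLLECT_INDICATOR), "w").close()
after = check_collect_indicator(collect_dir)           # indicator now present
```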

    def writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None):
        """
        Writes the stage indicator into the peer's collect directory.

        When the master has completed staging the peer's backup files, it will
        write an empty indicator file into the peer's collect directory. The
        presence of this file implies that the staging process is complete.

        If you need to, you can override the name of the stage indicator file by
        passing in a different name.

        @note: If you have user/group as strings, call the L{util.getUidGid}
        function to get the associated uid/gid as an ownership tuple.

        @param stageIndicator: Name of the indicator file to write
        @type stageIndicator: String representing name of a file in the collect directory

        @param ownership: Owner and group that the indicator file should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the indicator file should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @raise ValueError: If collect directory is not a directory or does not exist
        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If there is an IO error creating the file.
        @raise OSError: If there is an OS error creating or changing permissions on the file
        """
        stageIndicator = encodePath(stageIndicator)
        if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir):
            logger.debug("Collect directory [%s] is not a directory or does not exist on disk." % self.collectDir)
            raise ValueError("Collect directory is not a directory or does not exist on disk.")
        if stageIndicator is None:
            fileName = os.path.join(self.collectDir, DEF_STAGE_INDICATOR)
        else:
            fileName = os.path.join(self.collectDir, stageIndicator)
        LocalPeer._copyLocalFile(None, fileName, ownership, permissions)  # None for sourceFile results in an empty target


    ##################
    # Private methods
    ##################

    @staticmethod
    def _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None):
        """
        Copies files from the source directory to the target directory.

        This function is not recursive. Only the files in the directory will be
        copied. Ownership and permissions will be left at their default values
        if new values are not specified. The source and target directories are
        allowed to be soft links to a directory, but besides that soft links are
        ignored.

        @note: If you have user/group as strings, call the L{util.getUidGid}
        function to get the associated uid/gid as an ownership tuple.

        @param sourceDir: Source directory
        @type sourceDir: String representing a directory on disk

        @param targetDir: Target directory
        @type targetDir: String representing a directory on disk

        @param ownership: Owner and group that the copied files should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the copied files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @return: Number of files copied from the source directory to the target directory.

        @raise ValueError: If source or target is not a directory or does not exist.
        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If there is an IO error copying the files.
        @raise OSError: If there is an OS error copying or changing permissions on a file
        """
        filesCopied = 0
        sourceDir = encodePath(sourceDir)
        targetDir = encodePath(targetDir)
        for fileName in os.listdir(sourceDir):
            sourceFile = os.path.join(sourceDir, fileName)
            targetFile = os.path.join(targetDir, fileName)
            LocalPeer._copyLocalFile(sourceFile, targetFile, ownership, permissions)
            filesCopied += 1
        return filesCopied

    @staticmethod
    def _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True):
        """
        Copies a source file to a target file.

        If the source file is C{None} then the target file will be created or
        overwritten as an empty file. If the target file is C{None}, this method
        is a no-op. Attempting to copy a soft link or a directory will result in
        an exception.

        @note: If you have user/group as strings, call the L{util.getUidGid}
        function to get the associated uid/gid as an ownership tuple.

        @note: If C{overwrite} is C{False}, we will not overwrite a target file
        that exists when this method is invoked. If the target already exists,
        we'll raise an exception.

        @param sourceFile: Source file to copy
        @type sourceFile: String representing a file on disk, as an absolute path

        @param targetFile: Target file to create
        @type targetFile: String representing a file on disk, as an absolute path

        @param ownership: Owner and group that the copied file should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the copied file should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @param overwrite: Indicates whether it's OK to overwrite the target file.
        @type overwrite: Boolean true/false.

        @raise ValueError: If the passed-in source file is not a regular file.
        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If the target file already exists.
        @raise IOError: If there is an IO error copying the file.
        @raise OSError: If there is an OS error copying or changing permissions on a file
        """
        targetFile = encodePath(targetFile)
        sourceFile = encodePath(sourceFile)
        if targetFile is None:
            return
        if not overwrite:
            if os.path.exists(targetFile):
                raise IOError("Target file [%s] already exists." % targetFile)
        if sourceFile is None:
            open(targetFile, "w").write("")
        else:
            if os.path.isfile(sourceFile) and not os.path.islink(sourceFile):
                shutil.copy(sourceFile, targetFile)
            else:
                logger.debug("Source [%s] is not a regular file." % sourceFile)
                raise ValueError("Source is not a regular file.")
        if ownership is not None:
            os.chown(targetFile, ownership[0], ownership[1])
        if permissions is not None:
            os.chmod(targetFile, permissions)
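The "C{None} source means empty target" convention plus the overwrite guard can be shown in a few lines. A minimal sketch with a hypothetical C{copy_file} name, omitting the ownership/permissions and link-checking details handled above:

```python
import os
import shutil

def copy_file(source, target, overwrite=True):
    """Copy source to target; a None source creates an empty target file."""
    if not overwrite and os.path.exists(target):
        raise IOError("Target file [%s] already exists." % target)
    if source is None:
        open(target, "w").close()   # create or truncate to an empty file
    else:
        shutil.copy(source, target)
```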


########################################################################
# RemotePeer class definition
########################################################################

class RemotePeer(object):

    ######################
    # Class documentation
    ######################

    """
    Backup peer representing a remote peer in a backup pool.

    This is a class representing a remote (networked) peer in a backup pool.
    Remote peers are backed up using an rcp-compatible copy command. A remote
    peer has associated with it a name (which must be a valid hostname), a
    collect directory, a working directory and a copy method (an rcp-compatible
    command).

    You can also set an optional local user value. This username will be used
    as the local user for any remote copies that are required. It can only be
    used if the root user is executing the backup. The root user will C{su} to
    the local user and execute the remote copies as that user.

    The copy method is associated with the peer and not with the actual request
    to copy, because we can envision that each remote host might have a
    different connect method.

    The public methods other than the constructor are part of a "backup peer"
    interface shared with the C{LocalPeer} class.

    @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator,
           executeRemoteCommand, executeManagedAction, _getDirContents,
           _copyRemoteDir, _copyRemoteFile, _pushLocalFile, name, collectDir,
           remoteUser, rcpCommand, rshCommand, cbackCommand
    """

    ##############
    # Constructor
    ##############
    def __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None,
                 rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None,
                 ignoreFailureMode=None):
        """
        Initializes a remote backup peer.

        @note: If provided, each command will eventually be parsed into a list of
        strings suitable for passing to C{util.executeCommand} in order to avoid
        security holes related to shell interpolation. This parsing will be
        done by the L{util.splitCommandLine} function. See the documentation for
        that function for some important notes about its limitations.

        @param name: Name of the backup peer
        @type name: String, must be a valid DNS hostname

        @param collectDir: Path to the peer's collect directory
        @type collectDir: String representing an absolute path on the remote peer

        @param workingDir: Working directory that can be used to create temporary files, etc.
        @type workingDir: String representing an absolute path on the current host.

        @param remoteUser: Name of the Cedar Backup user on the remote peer
        @type remoteUser: String representing a username, valid via remote shell to the peer

        @param localUser: Name of the Cedar Backup user on the current host
        @type localUser: String representing a username, valid on the current host

        @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer
        @type rcpCommand: String representing a system command including required arguments

        @param rshCommand: An rsh-compatible command to use for remote shells to the peer
        @type rshCommand: String representing a system command including required arguments

        @param cbackCommand: A cback-compatible command to use for executing managed actions
        @type cbackCommand: String representing a system command including required arguments

        @param ignoreFailureMode: Ignore failure mode for this peer
        @type ignoreFailureMode: One of VALID_FAILURE_MODES

        @raise ValueError: If collect directory is not an absolute path
        """
        self._name = None
        self._collectDir = None
        self._workingDir = None
        self._remoteUser = None
        self._localUser = None
        self._rcpCommand = None
        self._rcpCommandList = None
        self._rshCommand = None
        self._rshCommandList = None
        self._cbackCommand = None
        self._ignoreFailureMode = None
        self.name = name
        self.collectDir = collectDir
        self.workingDir = workingDir
        self.remoteUser = remoteUser
        self.localUser = localUser
        self.rcpCommand = rcpCommand
        self.rshCommand = rshCommand
        self.cbackCommand = cbackCommand
        self.ignoreFailureMode = ignoreFailureMode


    #############
    # Properties
    #############

    def _setName(self, value):
        """
        Property target used to set the peer name.
        The value must be a non-empty string and cannot be C{None}.
        @raise ValueError: If the value is an empty string or C{None}.
        """
        if value is None or len(value) < 1:
            raise ValueError("Peer name must be a non-empty string.")
        self._name = value

    def _getName(self):
        """
        Property target used to get the peer name.
        """
        return self._name
    def _setCollectDir(self, value):
        """
        Property target used to set the collect directory.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Collect directory must be an absolute path.")
        self._collectDir = encodePath(value)

    def _getCollectDir(self):
        """
        Property target used to get the collect directory.
        """
        return self._collectDir
    def _setWorkingDir(self, value):
        """
        Property target used to set the working directory.
        The value must be an absolute path if it is not C{None}.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Working directory must be an absolute path.")
        self._workingDir = encodePath(value)

    def _getWorkingDir(self):
        """
        Property target used to get the working directory.
        """
        return self._workingDir
    def _setRemoteUser(self, value):
        """
        Property target used to set the remote user.
        The value must be a non-empty string and cannot be C{None}.
        @raise ValueError: If the value is an empty string or C{None}.
        """
        if value is None or len(value) < 1:
            raise ValueError("Peer remote user must be a non-empty string.")
        self._remoteUser = value

    def _getRemoteUser(self):
        """
        Property target used to get the remote user.
        """
        return self._remoteUser
    def _setLocalUser(self, value):
        """
        Property target used to set the local user.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("Peer local user must be a non-empty string.")
        self._localUser = value

    def _getLocalUser(self):
        """
        Property target used to get the local user.
        """
        return self._localUser
    def _setRcpCommand(self, value):
        """
        Property target used to set the rcp command.

        The value must be a non-empty string or C{None}. Its value is stored in
        two forms: "raw" as provided by the client, and "parsed" into a list
        suitable for being passed to L{util.executeCommand} via
        L{util.splitCommandLine}.

        However, all the caller will ever see via the property is the actual
        value they set (which includes seeing C{None}, even if we translate that
        internally to C{DEF_RCP_COMMAND}). Internally, we should always use
        C{self._rcpCommandList} if we want the actual command list.

        @raise ValueError: If the value is an empty string.
        """
        if value is None:
            self._rcpCommand = None
            self._rcpCommandList = DEF_RCP_COMMAND
        else:
            if len(value) >= 1:
                self._rcpCommand = value
                self._rcpCommandList = splitCommandLine(self._rcpCommand)
            else:
                raise ValueError("The rcp command must be a non-empty string.")

    def _getRcpCommand(self):
        """
        Property target used to get the rcp command.
        """
        return self._rcpCommand
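The "raw plus parsed" storage pattern above can be demonstrated in a standalone sketch. This is illustrative only: C{Peer} is a stripped-down stand-in, the C{DEF_RCP_COMMAND} value shown is an assumption, and C{shlex.split} stands in for C{util.splitCommandLine} (which has its own documented limitations):

```python
import shlex

# Assumed default, already in pre-parsed list form.
DEF_RCP_COMMAND = ["/usr/bin/scp", "-B"]

class Peer(object):
    """Stores the rcp command both raw (what the caller set) and parsed (for execution)."""

    def __init__(self):
        self._rcpCommand = None
        self._rcpCommandList = DEF_RCP_COMMAND

    def _setRcpCommand(self, value):
        if value is None:
            self._rcpCommand = None                  # the caller still sees None...
            self._rcpCommandList = DEF_RCP_COMMAND   # ...but internally we fall back
        elif len(value) >= 1:
            self._rcpCommand = value
            self._rcpCommandList = shlex.split(value)  # stand-in for util.splitCommandLine
        else:
            raise ValueError("The rcp command must be a non-empty string.")

    def _getRcpCommand(self):
        return self._rcpCommand

    rcpCommand = property(_getRcpCommand, _setRcpCommand)
```

Parsing once at assignment time means the command is always handed to the executor as an argument list, never re-joined through a shell, which is the stated defense against shell-interpolation holes.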
    def _setRshCommand(self, value):
        """
        Property target used to set the rsh command.

        The value must be a non-empty string or C{None}. Its value is stored in
        two forms: "raw" as provided by the client, and "parsed" into a list
        suitable for being passed to L{util.executeCommand} via
        L{util.splitCommandLine}.

        However, all the caller will ever see via the property is the actual
        value they set (which includes seeing C{None}, even if we translate that
        internally to C{DEF_RSH_COMMAND}). Internally, we should always use
        C{self._rshCommandList} if we want the actual command list.

        @raise ValueError: If the value is an empty string.
        """
        if value is None:
            self._rshCommand = None
            self._rshCommandList = DEF_RSH_COMMAND
        else:
            if len(value) >= 1:
                self._rshCommand = value
                self._rshCommandList = splitCommandLine(self._rshCommand)
            else:
                raise ValueError("The rsh command must be a non-empty string.")

    def _getRshCommand(self):
        """
        Property target used to get the rsh command.
        """
        return self._rshCommand
    def _setCbackCommand(self, value):
        """
        Property target used to set the cback command.

        The value must be a non-empty string or C{None}. Unlike the other
        commands, this value is only stored in the "raw" form provided by the
        client.

        @raise ValueError: If the value is an empty string.
        """
        if value is None:
            self._cbackCommand = None
        else:
            if len(value) >= 1:
                self._cbackCommand = value
            else:
                raise ValueError("The cback command must be a non-empty string.")

    def _getCbackCommand(self):
        """
        Property target used to get the cback command.
        """
        return self._cbackCommand
    def _setIgnoreFailureMode(self, value):
        """
        Property target used to set the ignore failure mode.
        If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_FAILURE_MODES:
                raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
        self._ignoreFailureMode = value

    def _getIgnoreFailureMode(self):
        """
        Property target used to get the ignore failure mode.
        """
        return self._ignoreFailureMode

    name = property(_getName, _setName, None, "Name of the peer (a valid DNS hostname).")
    collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute path on the remote peer).")
    workingDir = property(_getWorkingDir, _setWorkingDir, None, "Path to the peer's working directory (an absolute local path).")
    remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of the Cedar Backup user on the remote peer.")
    localUser = property(_getLocalUser, _setLocalUser, None, "Name of the Cedar Backup user on the current host.")
    rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "An rcp-compatible copy command to use for copying files.")
    rshCommand = property(_getRshCommand, _setRshCommand, None, "An rsh-compatible command to use for remote shells to the peer.")
    cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "A cback-compatible command to use for executing managed actions.")
    ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")


    #################
    # Public methods
    #################
    def stagePeer(self, targetDir, ownership=None, permissions=None):
        """
        Stages data from the peer into the indicated local target directory.

        The target directory must already exist before this method is called. If
        passed in, ownership and permissions will be applied to the files that
        are copied.

        @note: The returned count of copied files might be inaccurate if some of
        the copied files already existed in the staging directory prior to the
        copy taking place. We don't clear the staging directory first, because
        some extension might also be using it.

        @note: If you have user/group as strings, call the L{util.getUidGid} function
        to get the associated uid/gid as an ownership tuple.

        @note: Unlike the local peer version of this method, an I/O error might
        or might not be raised if the directory is empty. Since we're using a
        remote copy method, we just don't have the fine-grained control over our
        exceptions that's available when we can look directly at the filesystem,
        and we can't control whether the remote copy method thinks an empty
        directory is an error.

        @param targetDir: Target directory to write data into
        @type targetDir: String representing a directory on disk

        @param ownership: Owner and group that the staged files should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the staged files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @return: Number of files copied from the source directory to the target directory.

        @raise ValueError: If target directory is not a directory, does not exist or is not absolute.
        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If there were no files to stage (i.e. the directory was empty)
        @raise IOError: If there is an IO error copying a file.
        @raise OSError: If there is an OS error copying or changing permissions on a file
        """
        targetDir = encodePath(targetDir)
        if not os.path.isabs(targetDir):
            logger.debug("Target directory [%s] not an absolute path." % targetDir)
            raise ValueError("Target directory must be an absolute path.")
        if not os.path.exists(targetDir) or not os.path.isdir(targetDir):
            logger.debug("Target directory [%s] is not a directory or does not exist on disk." % targetDir)
            raise ValueError("Target directory is not a directory or does not exist on disk.")
        count = RemotePeer._copyRemoteDir(self.remoteUser, self.localUser, self.name,
                                          self._rcpCommand, self._rcpCommandList,
                                          self.collectDir, targetDir,
                                          ownership, permissions)
        if count == 0:
            raise IOError("Did not copy any files from remote peer.")
        return count

    def checkCollectIndicator(self, collectIndicator=None):
        """
        Checks the collect indicator in the peer's collect directory.

        When a peer has completed collecting its backup files, it will write an
        empty indicator file into its collect directory. This method checks to
        see whether that indicator has been written. If the remote copy command
        fails, we return C{False} as if the file weren't there.

        If you need to, you can override the name of the collect indicator file
        by passing in a different name.

        @note: Apparently, we can't count on all rcp-compatible implementations
        to return sensible errors for some error conditions. As an example, the
        C{scp} command in Debian 'woody' returns a zero (normal) status even when
        it can't find a host or if the login or path is invalid. Because of
        this, the implementation of this method is rather convoluted.

        @param collectIndicator: Name of the collect indicator file to check
        @type collectIndicator: String representing name of a file in the collect directory

        @return: Boolean true/false depending on whether the indicator exists.
        @raise ValueError: If a path cannot be encoded properly.
        """
        targetFile = None  # so the finally block is safe even if encodePath() fails
        try:
            if collectIndicator is None:
                sourceFile = os.path.join(self.collectDir, DEF_COLLECT_INDICATOR)
                targetFile = os.path.join(self.workingDir, DEF_COLLECT_INDICATOR)
            else:
                collectIndicator = encodePath(collectIndicator)
                sourceFile = os.path.join(self.collectDir, collectIndicator)
                targetFile = os.path.join(self.workingDir, collectIndicator)
            logger.debug("Fetch remote [%s] into [%s]." % (sourceFile, targetFile))
            if os.path.exists(targetFile):
                try:
                    os.remove(targetFile)
                except:
                    raise Exception("Unable to remove stale collect indicator [%s]." % targetFile)
            try:
                RemotePeer._copyRemoteFile(self.remoteUser, self.localUser, self.name,
                                           self._rcpCommand, self._rcpCommandList,
                                           sourceFile, targetFile,
                                           overwrite=False)
                if os.path.exists(targetFile):
                    return True
                else:
                    return False
            except Exception, e:
                logger.info("Failed looking for collect indicator: %s" % e)
                return False
        finally:
            if targetFile is not None and os.path.exists(targetFile):
                try:
                    os.remove(targetFile)
                except: pass

    def writeStageIndicator(self, stageIndicator=None):
        """
        Writes the stage indicator into the peer's collect directory.

        When the master has completed staging the peer's backup files, it will
        write an empty indicator file into the peer's collect directory. The
        presence of this file implies that the staging process is complete.

        If you need to, you can override the name of the stage indicator file by
        passing in a different name.

        @param stageIndicator: Name of the indicator file to write
        @type stageIndicator: String representing name of a file in the collect directory

        @raise ValueError: If a path cannot be encoded properly.
        @raise IOError: If there is an IO error creating the file.
        @raise OSError: If there is an OS error creating or changing permissions on the file
        """
        stageIndicator = encodePath(stageIndicator)
        if stageIndicator is None:
            sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR)
            targetFile = os.path.join(self.collectDir, DEF_STAGE_INDICATOR)
        else:
            sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR)
            targetFile = os.path.join(self.collectDir, stageIndicator)
        try:
            if not os.path.exists(sourceFile):
                open(sourceFile, "w").write("")
            RemotePeer._pushLocalFile(self.remoteUser, self.localUser, self.name,
                                      self._rcpCommand, self._rcpCommandList,
                                      sourceFile, targetFile)
        finally:
            if os.path.exists(sourceFile):
                try:
                    os.remove(sourceFile)
                except: pass

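The create-push-cleanup pattern used by C{writeStageIndicator} (a local temporary file always removed in a C{finally} block, whether or not the push succeeds) can be sketched standalone. C{push_with_temp} and its callback argument are hypothetical names, not part of Cedar Backup:

```python
import os

def push_with_temp(working_dir, push, name="cback.stage"):
    """Create an empty local indicator, push it remotely, always clean up the temp copy."""
    source = os.path.join(working_dir, name)
    try:
        if not os.path.exists(source):
            open(source, "w").close()   # empty indicator file
        push(source)                    # e.g. an rcp-compatible copy to the peer
    finally:
        if os.path.exists(source):
            try:
                os.remove(source)
            except OSError:
                pass                    # best-effort cleanup, mirroring the code above
```

The cleanup deliberately swallows removal errors: a leftover temp file is a cosmetic problem, while masking a push failure would not be.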
    def executeRemoteCommand(self, command):
        """
        Executes a command on the peer via remote shell.

        @param command: Command to execute
        @type command: String command-line suitable for use with rsh.

        @raise IOError: If there is an error executing the command on the remote peer.
        """
        RemotePeer._executeRemoteCommand(self.remoteUser, self.localUser,
                                         self.name, self._rshCommand,
                                         self._rshCommandList, command)
    def executeManagedAction(self, action, fullBackup):
        """
        Executes a managed action on this peer.

        @param action: Name of the action to execute.
        @param fullBackup: Whether a full backup should be executed.

        @raise IOError: If there is an error executing the action on the remote peer.
        """
        try:
            command = RemotePeer._buildCbackCommand(self.cbackCommand, action, fullBackup)
            self.executeRemoteCommand(command)
        except IOError, e:
            logger.info(e)
            raise IOError("Failed to execute action [%s] on managed client [%s]." % (action, self.name))


    ##################
    # Private methods
    ##################

    @staticmethod
    def _getDirContents(path):
        """
        Returns the contents of a directory in terms of a Set.

        The directory's contents are read as a L{FilesystemList} containing only
        files, and then the list is converted into a set object for later use.

        @param path: Directory path to get contents for
        @type path: String representing a path on disk

        @return: Set of files in the directory
        @raise ValueError: If path is not a directory or does not exist.
        """
        contents = FilesystemList()
        contents.excludeDirs = True
        contents.excludeLinks = True
        contents.addDirContents(path)
        try:
            return set(contents)
        except:
            import sets  # fall back to the deprecated sets module on very old interpreters
            return sets.Set(contents)

    @staticmethod
    def _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList,
                       sourceDir, targetDir, ownership=None, permissions=None):
        """
        Copies files from the source directory to the target directory.

        This function is not recursive. Only the files in the directory will be
        copied. Ownership and permissions will be left at their default values
        if new values are not specified. Behavior when copying soft links from
        the collect directory is dependent on the behavior of the specified rcp
        command.

        @note: The returned count of copied files might be inaccurate if some of
        the copied files already existed in the staging directory prior to the
        copy taking place. We don't clear the staging directory first, because
        some extension might also be using it.

        @note: If you have user/group as strings, call the L{util.getUidGid} function
        to get the associated uid/gid as an ownership tuple.

        @note: We don't have a good way of knowing exactly what files we copied
        down from the remote peer, unless we want to parse the output of the rcp
        command (ugh). We could change permissions on everything in the target
        directory, but that's kind of ugly too. Instead, we use Python's set
        functionality to figure out what files were added while we executed the
        rcp command. This isn't perfect - for instance, it's not correct if
        someone else is messing with the directory at the same time we're doing
        the remote copy - but it's about as good as we're going to get.

        @note: Apparently, we can't count on all rcp-compatible implementations
        to return sensible errors for some error conditions. As an example, the
        C{scp} command in Debian 'woody' returns a zero (normal) status even
        when it can't find a host or if the login or path is invalid. We try
        to work around this by raising C{IOError} if we don't copy any files from
        the remote host.

        @param remoteUser: Name of the Cedar Backup user on the remote peer
        @type remoteUser: String representing a username, valid via the copy command

        @param localUser: Name of the Cedar Backup user on the current host
        @type localUser: String representing a username, valid on the current host

        @param remoteHost: Hostname of the remote peer
        @type remoteHost: String representing a hostname, accessible via the copy command

        @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer
        @type rcpCommand: String representing a system command including required arguments

        @param rcpCommandList: An rcp-compatible copy command to use for copying files
        @type rcpCommandList: Command as a list to be passed to L{util.executeCommand}

        @param sourceDir: Source directory
        @type sourceDir: String representing a directory on disk

        @param targetDir: Target directory
        @type targetDir: String representing a directory on disk

        @param ownership: Owner and group that the copied files should have
        @type ownership: Tuple of numeric ids C{(uid, gid)}

        @param permissions: Permissions that the copied files should have
        @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}).

        @return: Number of files copied from the source directory to the target directory.

        @raise ValueError: If source or target is not a directory or does not exist.
        @raise IOError: If there is an IO error copying the files.
        """
        beforeSet = RemotePeer._getDirContents(targetDir)
        if localUser is not None:
            try:
                if not isRunningAsRoot():
                    raise IOError("Only root can remote copy as another user.")
            except AttributeError: pass
            actualCommand = "%s %s@%s:%s/* %s" % (rcpCommand, remoteUser, remoteHost, sourceDir, targetDir)
            command = resolveCommand(SU_COMMAND)
            result = executeCommand(command, [localUser, "-c", actualCommand])[0]
            if result != 0:
                raise IOError("Error (%d) copying files from remote host as local user [%s]." % (result, localUser))
        else:
            copySource = "%s@%s:%s/*" % (remoteUser, remoteHost, sourceDir)
            command = resolveCommand(rcpCommandList)
            result = executeCommand(command, [copySource, targetDir])[0]
            if result != 0:
                raise IOError("Error (%d) copying files from remote host." % result)
        afterSet = RemotePeer._getDirContents(targetDir)
        if len(afterSet) == 0:
            raise IOError("Did not copy any files from remote peer.")
        differenceSet = afterSet.difference(beforeSet)  # files we added as part of the copy
        if len(differenceSet) == 0:
            raise IOError("Apparently did not copy any new files from remote peer.")
        for targetFile in differenceSet:
            if ownership is not None:
                os.chown(targetFile, ownership[0], ownership[1])
            if permissions is not None:
                os.chmod(targetFile, permissions)
        return len(differenceSet)
    1040 1041 @staticmethod
    1042 - def _copyRemoteFile(remoteUser, localUser, remoteHost, 1043 rcpCommand, rcpCommandList, 1044 sourceFile, targetFile, ownership=None, 1045 permissions=None, overwrite=True):
    1046 """ 1047 Copies a remote source file to a target file. 1048 1049 @note: Internally, we have to go through and escape any spaces in the 1050 source path with double-backslash, otherwise things get screwed up. It 1051 doesn't seem to be required in the target path. I hope this is portable 1052 to various different rcp methods, but I guess it might not be (all I have 1053 to test with is OpenSSH). 1054 1055 @note: If you have user/group as strings, call the L{util.getUidGid} function 1056 to get the associated uid/gid as an ownership tuple. 1057 1058 @note: We will not overwrite a target file that exists when this method 1059 is invoked. If the target already exists, we'll raise an exception. 1060 1061 @note: Apparently, we can't count on all rcp-compatible implementations 1062 to return sensible errors for some error conditions. As an example, the 1063 C{scp} command in Debian 'woody' returns a zero (normal) status even when 1064 it can't find a host or if the login or path is invalid. We try to work 1065 around this by raising C{IOError} if the target file does not exist when 1066 we're done. 
1067 1068 @param remoteUser: Name of the Cedar Backup user on the remote peer 1069 @type remoteUser: String representing a username, valid via the copy command 1070 1071 @param remoteHost: Hostname of the remote peer 1072 @type remoteHost: String representing a hostname, accessible via the copy command 1073 1074 @param localUser: Name of the Cedar Backup user on the current host 1075 @type localUser: String representing a username, valid on the current host 1076 1077 @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer 1078 @type rcpCommand: String representing a system command including required arguments 1079 1080 @param rcpCommandList: An rcp-compatible copy command to use for copying files 1081 @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} 1082 1083 @param sourceFile: Source file to copy 1084 @type sourceFile: String representing a file on disk, as an absolute path 1085 1086 @param targetFile: Target file to create 1087 @type targetFile: String representing a file on disk, as an absolute path 1088 1089 @param ownership: Owner and group that the copied file should have 1090 @type ownership: Tuple of numeric ids C{(uid, gid)} 1091 1092 @param permissions: Permissions that the staged files should have 1093 @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). 1094 1095 @param overwrite: Indicates whether it's OK to overwrite the target file. 1096 @type overwrite: Boolean true/false. 1097 1098 @raise IOError: If the target file already exists. 1099 @raise IOError: If there is an IO error copying the file 1100 @raise OSError: If there is an OS error changing permissions on the file 1101 """ 1102 if not overwrite: 1103 if os.path.exists(targetFile): 1104 raise IOError("Target file [%s] already exists." 
% targetFile) 1105 if localUser is not None: 1106 try: 1107 if not isRunningAsRoot(): 1108 raise IOError("Only root can remote copy as another user.") 1109 except AttributeError: pass 1110 actualCommand = "%s %s@%s:%s %s" % (rcpCommand, remoteUser, remoteHost, sourceFile.replace(" ", "\\ "), targetFile) 1111 command = resolveCommand(SU_COMMAND) 1112 result = executeCommand(command, [localUser, "-c", actualCommand])[0] 1113 if result != 0: 1114 raise IOError("Error (%d) copying [%s] from remote host as local user [%s]." % (result, sourceFile, localUser)) 1115 else: 1116 copySource = "%s@%s:%s" % (remoteUser, remoteHost, sourceFile.replace(" ", "\\ ")) 1117 command = resolveCommand(rcpCommandList) 1118 result = executeCommand(command, [copySource, targetFile])[0] 1119 if result != 0: 1120 raise IOError("Error (%d) copying [%s] from remote host." % (result, sourceFile)) 1121 if not os.path.exists(targetFile): 1122 raise IOError("Apparently unable to copy file from remote host.") 1123 if ownership is not None: 1124 os.chown(targetFile, ownership[0], ownership[1]) 1125 if permissions is not None: 1126 os.chmod(targetFile, permissions)
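The space-escaping described in the note above takes only a couple of lines. This sketch (hypothetical function name) builds the remote source specifier the way the method does, escaping spaces in the source path only:

```python
def build_copy_source(remote_user, remote_host, source_file):
    # Escape spaces so the remote shell does not split the path; per the
    # note above, the local target path does not need the same treatment.
    escaped = source_file.replace(" ", "\\ ")
    return "%s@%s:%s" % (remote_user, remote_host, escaped)
```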
    1127 1128 @staticmethod
    1129 - def _pushLocalFile(remoteUser, localUser, remoteHost, 1130 rcpCommand, rcpCommandList, 1131 sourceFile, targetFile, overwrite=True):
    1132 """ 1133 Copies a local source file to a remote host. 1134 1135 @note: We will not overwrite a target file that exists when this method 1136 is invoked. If the target already exists, we'll raise an exception. 1137 1138 @note: Internally, we have to go through and escape any spaces in the 1139 source and target paths with double-backslash, otherwise things get 1140 screwed up. I hope this is portable to various different rcp methods, 1141 but I guess it might not be (all I have to test with is OpenSSH). 1142 1143 @note: If you have user/group as strings, call the L{util.getUidGid} function 1144 to get the associated uid/gid as an ownership tuple. 1145 1146 @param remoteUser: Name of the Cedar Backup user on the remote peer 1147 @type remoteUser: String representing a username, valid via the copy command 1148 1149 @param localUser: Name of the Cedar Backup user on the current host 1150 @type localUser: String representing a username, valid on the current host 1151 1152 @param remoteHost: Hostname of the remote peer 1153 @type remoteHost: String representing a hostname, accessible via the copy command 1154 1155 @param rcpCommand: An rcp-compatible copy command to use for copying files to the peer 1156 @type rcpCommand: String representing a system command including required arguments 1157 1158 @param rcpCommandList: An rcp-compatible copy command to use for copying files 1159 @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} 1160 1161 @param sourceFile: Source file to copy 1162 @type sourceFile: String representing a file on disk, as an absolute path 1163 1164 @param targetFile: Target file to create 1165 @type targetFile: String representing a file on disk, as an absolute path 1166 1167 @param overwrite: Indicates whether it's OK to overwrite the target file. 1168 @type overwrite: Boolean true/false. 
1169 1170 @raise IOError: If there is an IO error copying the file 1171 @raise OSError: If there is an OS error changing permissions on the file 1172 """ 1173 if not overwrite: 1174 if os.path.exists(targetFile): 1175 raise IOError("Target file [%s] already exists." % targetFile) 1176 if localUser is not None: 1177 try: 1178 if not isRunningAsRoot(): 1179 raise IOError("Only root can remote copy as another user.") 1180 except AttributeError: pass 1181 actualCommand = '%s "%s" "%s@%s:%s"' % (rcpCommand, sourceFile, remoteUser, remoteHost, targetFile) 1182 command = resolveCommand(SU_COMMAND) 1183 result = executeCommand(command, [localUser, "-c", actualCommand])[0] 1184 if result != 0: 1185 raise IOError("Error (%d) copying [%s] to remote host as local user [%s]." % (result, sourceFile, localUser)) 1186 else: 1187 copyTarget = "%s@%s:%s" % (remoteUser, remoteHost, targetFile.replace(" ", "\\ ")) 1188 command = resolveCommand(rcpCommandList) 1189 result = executeCommand(command, [sourceFile.replace(" ", "\\ "), copyTarget])[0] 1190 if result != 0: 1191 raise IOError("Error (%d) copying [%s] to remote host." % (result, sourceFile))
    1192 1193 @staticmethod
    1194 - def _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand):
    1195 """ 1196 Executes a command on the peer via remote shell. 1197 1198 @param remoteUser: Name of the Cedar Backup user on the remote peer 1199 @type remoteUser: String representing a username, valid on the remote host 1200 1201 @param localUser: Name of the Cedar Backup user on the current host 1202 @type localUser: String representing a username, valid on the current host 1203 1204 @param remoteHost: Hostname of the remote peer 1205 @type remoteHost: String representing a hostname, accessible via the remote shell command 1206 1207 @param rshCommand: An rsh-compatible command to use for remote shells to the peer 1208 @type rshCommand: String representing a system command including required arguments 1209 1210 @param rshCommandList: An rsh-compatible command to use for remote shells to the peer 1211 @type rshCommandList: Command as a list to be passed to L{util.executeCommand} 1212 1213 @param remoteCommand: The command to be executed on the remote host 1214 @type remoteCommand: String command-line, with no special shell characters ($, <, etc.) 1215 1216 @raise IOError: If there is an error executing the remote command 1217 """ 1218 actualCommand = "%s %s@%s '%s'" % (rshCommand, remoteUser, remoteHost, remoteCommand) 1219 if localUser is not None: 1220 try: 1221 if not isRunningAsRoot(): 1222 raise IOError("Only root can remote shell as another user.") 1223 except AttributeError: pass 1224 command = resolveCommand(SU_COMMAND) 1225 result = executeCommand(command, [localUser, "-c", actualCommand])[0] 1226 if result != 0: 1227 raise IOError("Command failed [su -c %s \"%s\"]" % (localUser, actualCommand)) 1228 else: 1229 command = resolveCommand(rshCommandList) 1230 result = executeCommand(command, ["%s@%s" % (remoteUser, remoteHost), "%s" % remoteCommand])[0] 1231 if result != 0: 1232 raise IOError("Command failed [%s]" % (actualCommand))
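When localUser is set, the remote shell runs under su rather than directly. A minimal sketch of the argument layout (hypothetical function name; the SU_COMMAND resolution step is simplified to a bare "su"):

```python
def build_su_shell(local_user, rsh_command, remote_user, remote_host, remote_command):
    # The full remote shell invocation becomes a single -c argument to su,
    # so the whole thing executes under the local user's account.
    actual = "%s %s@%s '%s'" % (rsh_command, remote_user, remote_host, remote_command)
    return ["su", local_user, "-c", actual]
```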
    1233 1234 @staticmethod
    1235 - def _buildCbackCommand(cbackCommand, action, fullBackup):
    1236 """ 1237 Builds a Cedar Backup command line for the named action. 1238 1239 @note: If the cback command is None, then DEF_CBACK_COMMAND is used. 1240 1241 @param cbackCommand: cback command to execute, including required options 1242 @param action: Name of the action to execute. 1243 @param fullBackup: Whether a full backup should be executed. 1244 1245 @return: String suitable for passing to L{_executeRemoteCommand} as remoteCommand. 1246 @raise ValueError: If action is None. 1247 """ 1248 if action is None: 1249 raise ValueError("Action cannot be None.") 1250 if cbackCommand is None: 1251 cbackCommand = DEF_CBACK_COMMAND 1252 if fullBackup: 1253 return "%s --full %s" % (cbackCommand, action) 1254 else: 1255 return "%s %s" % (cbackCommand, action)
    1256

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.split.LocalConfig-class.html

    CedarBackup2.extend.split.LocalConfig
    Package CedarBackup2 :: Package extend :: Module split :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit split-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <split> configuration section as the next child of a parent.
    source code
     
    _setSplit(self, value)
    Property target used to set the split configuration value.
    source code
     
    _getSplit(self)
    Property target used to get the split configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseSplit(parent)
    Parses a split configuration section.
    source code
    Properties
      split
    Split configuration in terms of a SplitConfig object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Split configuration must be filled in. Within that, both the size limit and split size must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <split> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      sizeLimit      //cb_config/split/size_limit
      splitSize      //cb_config/split/split_size
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
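A minimal sketch of a document containing the two fields listed above, and how a third party might read them back with the standard library. The element values are made-up examples, and this is not the extension's actual parser:

```python
from xml.dom.minidom import parseString

# Hypothetical minimal document matching the documented field paths
# //cb_config/split/size_limit and //cb_config/split/split_size.
XML = ("<cb_config><split>"
       "<size_limit>2 GB</size_limit>"
       "<split_size>100 MB</split_size>"
       "</split></cb_config>")

def read_split_fields(xml_data):
    # Pull the two split fields out of the document; a sketch only.
    dom = parseString(xml_data)
    split = dom.getElementsByTagName("split")[0]
    values = {}
    for tag in ("size_limit", "split_size"):
        node = split.getElementsByTagName(tag)[0]
        values[tag] = node.firstChild.data
    return values
```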

    _setSplit(self, value)

    source code 

    Property target used to set the split configuration value. If not None, the value must be a SplitConfig object.

    Raises:
    • ValueError - If the value is not a SplitConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the split configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseSplit(parent)
    Static Method

    source code 

    Parses a split configuration section.

    We read the following individual fields:

      sizeLimit      //cb_config/split/size_limit
      splitSize      //cb_config/split/split_size
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    SplitConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    Property Details

    split

    Split configuration in terms of a SplitConfig object.

    Get Method:
    _getSplit(self) - Property target used to get the split configuration value.
    Set Method:
    _setSplit(self, value) - Property target used to set the split configuration value.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers-module.html

    CedarBackup2.writers
    Package CedarBackup2 :: Package writers

    Package writers

    source code

    Cedar Backup writers.

    This package consolidates all of the modules that implement "image writer" functionality, including utilities and specific writer implementations.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules

    Variables
      __package__ = None
    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.cdwriter.MediaDefinition-class.html

    CedarBackup2.writers.cdwriter.MediaDefinition
    Package CedarBackup2 :: Package writers :: Module cdwriter :: Class MediaDefinition

    Class MediaDefinition

    source code

    object --+
             |
            MediaDefinition
    

    Class encapsulating information about CD media definitions.

    The following media types are accepted:

    • MEDIA_CDR_74: 74-minute CD-R media (650 MB capacity)
    • MEDIA_CDRW_74: 74-minute CD-RW media (650 MB capacity)
    • MEDIA_CDR_80: 80-minute CD-R media (700 MB capacity)
    • MEDIA_CDRW_80: 80-minute CD-RW media (700 MB capacity)

    Note that all of the capacities associated with a media definition are in terms of ISO sectors (util.ISO_SECTOR_SIZE).
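Since capacities are tracked in ISO sectors, converting the nominal sizes above is a single division. A back-of-the-envelope sketch (assuming the standard 2048-byte ISO-9660 data sector, which is what util.ISO_SECTOR_SIZE is taken to represent here):

```python
ISO_SECTOR_SIZE = 2048  # bytes per ISO-9660 data sector (assumed value)

def mb_to_sectors(megabytes):
    # Convert a nominal capacity in MB to the sector count that the media
    # definitions work in.
    return (megabytes * 1024 * 1024) // ISO_SECTOR_SIZE

capacity_74 = mb_to_sectors(650)  # 74-minute CD-R/CD-RW media
capacity_80 = mb_to_sectors(700)  # 80-minute CD-R/CD-RW media
```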

    Instance Methods
     
    __init__(self, mediaType)
    Creates a media definition for the indicated media type.
    source code
     
    _setValues(self, mediaType)
    Sets values based on media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type value.
    source code
     
    _getRewritable(self)
    Property target used to get the rewritable flag value.
    source code
     
    _getInitialLeadIn(self)
    Property target used to get the initial lead-in value.
    source code
     
    _getLeadIn(self)
    Property target used to get the lead-in value.
    source code
     
    _getCapacity(self)
    Property target used to get the capacity value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties
      mediaType
    Configured media type.
      rewritable
    Boolean indicating whether the media is rewritable.
      initialLeadIn
    Initial lead-in required for first image written to media.
      leadIn
    Lead-in required on successive images written to media.
      capacity
    Total capacity of the media before any required lead-in.

    Inherited from object: __class__

    Method Details

    __init__(self, mediaType)
    (Constructor)

    source code 

    Creates a media definition for the indicated media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.
    Overrides: object.__init__

    _setValues(self, mediaType)

    source code 

    Sets values based on media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.

    Property Details

    mediaType

    Configured media type.

    Get Method:
    _getMediaType(self) - Property target used to get the media type value.

    rewritable

    Boolean indicating whether the media is rewritable.

    Get Method:
    _getRewritable(self) - Property target used to get the rewritable flag value.

    initialLeadIn

    Initial lead-in required for first image written to media.

    Get Method:
    _getInitialLeadIn(self) - Property target used to get the initial lead-in value.

    leadIn

    Lead-in required on successive images written to media.

    Get Method:
    _getLeadIn(self) - Property target used to get the lead-in value.

    capacity

    Total capacity of the media before any required lead-in.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity value.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.store-pysrc.html

    CedarBackup2.actions.store
    Package CedarBackup2 :: Package actions :: Module store

    Source Code for Module CedarBackup2.actions.store

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: store.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Implements the standard 'store' action. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Implements the standard 'store' action. 
     41  @sort: executeStore, writeImage, writeStoreIndicator, consistencyCheck 
     42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     43  @author: Dmitry Rutsky <rutsky@inbox.ru> 
     44  """ 
     45   
     46   
     47  ######################################################################## 
     48  # Imported modules 
     49  ######################################################################## 
     50   
     51  # System modules 
     52  import sys 
     53  import os 
     54  import logging 
     55  import datetime 
     56  import tempfile 
     57   
     58  # Cedar Backup modules 
     59  from CedarBackup2.filesystem import compareContents 
     60  from CedarBackup2.util import isStartOfWeek 
     61  from CedarBackup2.util import mount, unmount, displayBytes 
     62  from CedarBackup2.actions.util import createWriter, checkMediaState, buildMediaLabel, writeIndicatorFile 
     63  from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR, STORE_INDICATOR 
     64   
     65   
     66  ######################################################################## 
     67  # Module-wide constants and variables 
     68  ######################################################################## 
     69   
     70  logger = logging.getLogger("CedarBackup2.log.actions.store") 
     71   
     72   
     73  ######################################################################## 
     74  # Public functions 
     75  ######################################################################## 
     76   
     77  ########################## 
     78  # executeStore() function 
     79  ########################## 
     80   
    
    81 -def executeStore(configPath, options, config):
    82 """ 83 Executes the store backup action. 84 85 @note: The rebuild action and the store action are very similar. The 86 main difference is that while store only stores a single day's staging 87 directory, the rebuild action operates on multiple staging directories. 88 89 @note: When the store action is complete, we will write a store indicator to 90 the daily staging directory we used, so it's obvious that the store action 91 has completed. 92 93 @param configPath: Path to configuration file on disk. 94 @type configPath: String representing a path on disk. 95 96 @param options: Program command-line options. 97 @type options: Options object. 98 99 @param config: Program configuration. 100 @type config: Config object. 101 102 @raise ValueError: Under many generic error conditions 103 @raise IOError: If there are problems reading or writing files. 104 """ 105 logger.debug("Executing the 'store' action.") 106 if sys.platform == "darwin": 107 logger.warn("Warning: the store action is not fully supported on Mac OS X.") 108 logger.warn("See the Cedar Backup software manual for further information.") 109 if config.options is None or config.store is None: 110 raise ValueError("Store configuration is not properly filled in.") 111 if config.store.checkMedia: 112 checkMediaState(config.store) # raises exception if media is not initialized 113 rebuildMedia = options.full 114 logger.debug("Rebuild media flag [%s]" % rebuildMedia) 115 todayIsStart = isStartOfWeek(config.options.startingDay) 116 stagingDirs = _findCorrectDailyDir(options, config) 117 writeImageBlankSafe(config, rebuildMedia, todayIsStart, config.store.blankBehavior, stagingDirs) 118 if config.store.checkData: 119 if sys.platform == "darwin": 120 logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.") 121 logger.warn("See the Cedar Backup software manual for further information.") 122 else: 123 logger.debug("Running consistency check of media.") 124 consistencyCheck(config, 
stagingDirs) 125 writeStoreIndicator(config, stagingDirs) 126 logger.info("Executed the 'store' action successfully.")
    127 128 129 ######################## 130 # writeImage() function 131 ######################## 132
    133 -def writeImage(config, newDisc, stagingDirs):
    134 """ 135 Builds and writes an ISO image containing the indicated stage directories. 136 137 The generated image will contain each of the staging directories listed in 138 C{stagingDirs}. The directories will be placed into the image at the root by 139 date, so staging directory C{/opt/stage/2005/02/10} will be placed into the 140 disc at C{/2005/02/10}. 141 142 @note: This function is implemented in terms of L{writeImageBlankSafe}. The 143 C{newDisc} flag is passed in for both C{rebuildMedia} and C{todayIsStart}. 144 145 @param config: Config object. 146 @param newDisc: Indicates whether the disc should be re-initialized 147 @param stagingDirs: Dictionary mapping directory path to date suffix. 148 149 @raise ValueError: Under many generic error conditions 150 @raise IOError: If there is a problem writing the image to disc. 151 """ 152 writeImageBlankSafe(config, newDisc, newDisc, None, stagingDirs)
    153 154 155 ################################# 156 # writeImageBlankSafe() function 157 ################################# 158
    159 -def writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs):
    160 """ 161 Builds and writes an ISO image containing the indicated stage directories. 162 163 The generated image will contain each of the staging directories listed in 164 C{stagingDirs}. The directories will be placed into the image at the root by 165 date, so staging directory C{/opt/stage/2005/02/10} will be placed into the 166 disc at C{/2005/02/10}. The media will always be written with a media 167 label specific to Cedar Backup. 168 169 This function is similar to L{writeImage}, but tries to implement a smarter 170 blanking strategy. 171 172 First, the media is always blanked if the C{rebuildMedia} flag is true. 173 Then, if C{rebuildMedia} is false, blanking behavior and C{todayIsStart} 174 come into effect:: 175 176 If no blanking behavior is specified, and it is the start of the week, 177 the disc will be blanked 178 179 If blanking behavior is specified, and either the blank mode is "daily" 180 or the blank mode is "weekly" and it is the start of the week, then 181 the disc will be blanked if it looks like the weekly backup will not 182 fit onto the media. 183 184 Otherwise, the disc will not be blanked 185 186 How do we decide whether the weekly backup will fit onto the media? That is 187 what the blanking factor is used for. The following formula is used:: 188 189 will backup fit? = (bytes available / (1 + bytes required)) <= blankFactor 190 191 The blanking factor will vary from setup to setup, and will probably 192 require some experimentation to get it right. 193 194 @param config: Config object. 195 @param rebuildMedia: Indicates whether media should be rebuilt 196 @param todayIsStart: Indicates whether today is the starting day of the week 197 @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior 198 @param stagingDirs: Dictionary mapping directory path to date suffix. 199 200 @raise ValueError: Under many generic error conditions 201 @raise IOError: If there is a problem writing the image to disc. 
202 """ 203 mediaLabel = buildMediaLabel() 204 writer = createWriter(config) 205 writer.initializeImage(True, config.options.workingDir, mediaLabel) # default value for newDisc 206 for stageDir in stagingDirs.keys(): 207 logger.debug("Adding stage directory [%s]." % stageDir) 208 dateSuffix = stagingDirs[stageDir] 209 writer.addImageEntry(stageDir, dateSuffix) 210 newDisc = _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior) 211 writer.setImageNewDisc(newDisc) 212 writer.writeImage()
    213
    214 -def _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior):
    215 """ 216 Gets a value for the newDisc flag based on blanking factor rules. 217 218 The blanking factor rules are described above by L{writeImageBlankSafe}. 219 220 @param writer: Previously configured image writer containing image entries 221 @param rebuildMedia: Indicates whether media should be rebuilt 222 @param todayIsStart: Indicates whether today is the starting day of the week 223 @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior 224 225 @return: newDisc flag to be set on writer. 226 """ 227 newDisc = False 228 if rebuildMedia: 229 newDisc = True 230 logger.debug("Setting new disc flag based on rebuildMedia flag.") 231 else: 232 if blankBehavior is None: 233 logger.debug("Default media blanking behavior is in effect.") 234 if todayIsStart: 235 newDisc = True 236 logger.debug("Setting new disc flag based on todayIsStart.") 237 else: 238 # note: validation says we can assume that behavior is fully filled in if it exists at all 239 logger.debug("Optimized media blanking behavior is in effect based on configuration.") 240 if blankBehavior.blankMode == "daily" or (blankBehavior.blankMode == "weekly" and todayIsStart): 241 logger.debug("New disc flag will be set based on blank factor calculation.") 242 blankFactor = float(blankBehavior.blankFactor) 243 logger.debug("Configured blanking factor: %.2f" % blankFactor) 244 available = writer.retrieveCapacity().bytesAvailable 245 logger.debug("Bytes available: %s" % displayBytes(available)) 246 required = writer.getEstimatedImageSize() 247 logger.debug("Bytes required: %s" % displayBytes(required)) 248 ratio = available / (1.0 + required) 249 logger.debug("Calculated ratio: %.2f" % ratio) 250 newDisc = (ratio <= blankFactor) 251 logger.debug("%.2f <= %.2f ? %s" % (ratio, blankFactor, newDisc)) 252 else: 253 logger.debug("No blank factor calculation is required based on configuration.") 254 logger.debug("New disc flag [%s]." % newDisc) 255 return newDisc
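The decision in _getNewDisc boils down to one comparison. A standalone sketch of the blank-factor test (hypothetical function name; the writer's capacity and size lookups are replaced by plain numbers):

```python
def need_new_disc(bytes_available, bytes_required, blank_factor):
    # Matches the docstring formula: blank the disc when available capacity
    # divided by (1 + required) is at or below the configured factor.
    # The "1 +" guards against division by zero on an empty image.
    ratio = bytes_available / (1.0 + bytes_required)
    return ratio <= blank_factor
```

For example, with 100 bytes available and 199 required the ratio is 0.5, so a blank factor of 1.0 forces a blank; with 1000 available and 99 required the ratio is 10.0 and the disc is left alone.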


#################################
# writeStoreIndicator() function
#################################

def writeStoreIndicator(config, stagingDirs):
   """
   Writes a store indicator file into staging directories.

   The store indicator is written into each of the staging directories when
   either a store or rebuild action has written the staging directory to disc.

   @param config: Config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.
   """
   for stagingDir in stagingDirs.keys():
      writeIndicatorFile(stagingDir, STORE_INDICATOR,
                         config.options.backupUser,
                         config.options.backupGroup)


##############################
# consistencyCheck() function
##############################

def consistencyCheck(config, stagingDirs):
   """
   Runs a consistency check against media in the backup device.

   It seems that sometimes, it's possible to create a corrupted multisession
   disc (i.e. one that cannot be read) although no errors were encountered
   while writing the disc.  This consistency check makes sure that the data
   read from disc matches the data that was used to create the disc.

   The function mounts the device at a temporary mount point in the working
   directory, and then compares the indicated staging directories in the
   staging directory and on the media.  The comparison is done via
   functionality in C{filesystem.py}.

   If no exceptions are thrown, there were no problems with the consistency
   check.  A positive confirmation of "no problems" is also written to the log
   with C{info} priority.

   @warning: The implementation of this function is very UNIX-specific.

   @param config: Config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.

   @raise ValueError: If the two directories are not equivalent.
   @raise IOError: If there is a problem working with the media.
   """
   logger.debug("Running consistency check.")
   mountPoint = tempfile.mkdtemp(dir=config.options.workingDir)
   try:
      mount(config.store.devicePath, mountPoint, "iso9660")
      for stagingDir in stagingDirs.keys():
         discDir = os.path.join(mountPoint, stagingDirs[stagingDir])
         logger.debug("Checking [%s] vs. [%s]." % (stagingDir, discDir))
         compareContents(stagingDir, discDir, verbose=True)
         logger.info("Consistency check completed for [%s].  No problems found." % stagingDir)
   finally:
      unmount(mountPoint, True, 5, 1)  # try 5 times, and remove mount point when done


########################################################################
# Private utility functions
########################################################################

#########################
# _findCorrectDailyDir()
#########################

def _findCorrectDailyDir(options, config):
   """
   Finds the correct daily staging directory to be written to disk.

   In Cedar Backup v1.0, we assumed that the correct staging directory matched
   the current date.  However, that has problems.  In particular, it breaks
   down if collect is on one side of midnite and stage is on the other, or if
   certain processes span midnite.

   For v2.0, I'm trying to be smarter.  I'll first check the current day.  If
   that directory is found, it's good enough.  If it's not found, I'll look
   for a valid directory from the day before or day after I{which has not yet
   been staged, according to the stage indicator file}.  The first one I find,
   I'll use.  If I use a directory other than for the current day I{and}
   C{config.store.warnMidnite} is set, a warning will be put in the log.

   There is one exception to this rule.  If the C{options.full} flag is set,
   then the special "span midnite" logic will be disabled and any existing
   store indicator will be ignored.  I did this because I think that most
   users who run C{cback --full store} twice in a row expect the command to
   generate two identical discs.  With the other rule in place, running that
   command twice in a row could result in an error ("no unstored directory
   exists") or could even cause a completely unexpected directory to be
   written to disc (if some previous day's contents had not yet been written).

   @note: This code is probably longer and more verbose than it needs to be,
   but at least it's straightforward.

   @param options: Options object.
   @param config: Config object.

   @return: Correct staging dir, as a dict mapping directory to date suffix.
   @raise IOError: If the staging directory cannot be found.
   """
   oneDay = datetime.timedelta(days=1)
   today = datetime.date.today()
   yesterday = today - oneDay
   tomorrow = today + oneDay
   todayDate = today.strftime(DIR_TIME_FORMAT)
   yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT)
   tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT)
   todayPath = os.path.join(config.stage.targetDir, todayDate)
   yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate)
   tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate)
   todayStageInd = os.path.join(todayPath, STAGE_INDICATOR)
   yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR)
   tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR)
   todayStoreInd = os.path.join(todayPath, STORE_INDICATOR)
   yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR)
   tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR)
   if options.full:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd):
         logger.info("Store process will use current day's stage directory [%s]" % todayPath)
         return { todayPath:todayDate }
      raise IOError("Unable to find staging directory to store (only tried today due to full option).")
   else:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd):
         logger.info("Store process will use current day's stage directory [%s]" % todayPath)
         return { todayPath:todayDate }
      elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd):
         logger.info("Store process will use previous day's stage directory [%s]" % yesterdayPath)
         if config.store.warnMidnite:
            logger.warn("Warning: store process crossed midnite boundary to find data.")
         return { yesterdayPath:yesterdayDate }
      elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd):
         logger.info("Store process will use next day's stage directory [%s]" % tomorrowPath)
         if config.store.warnMidnite:
            logger.warn("Warning: store process crossed midnite boundary to find data.")
         return { tomorrowPath:tomorrowDate }
      raise IOError("Unable to find unused staging directory to store (tried today, yesterday, tomorrow).")

CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.dvdwriter-module.html
    Package CedarBackup2 :: Package writers :: Module dvdwriter

    Module dvdwriter

    source code

    Provides functionality related to DVD writer devices.


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Dmitry Rutsky <rutsky@inbox.ru>
Classes
      MediaDefinition
    Class encapsulating information about DVD media definitions.
      DvdWriter
    Class representing a device that knows how to write some kinds of DVD media.
      MediaCapacity
    Class encapsulating information about DVD media capacity.
      _ImageProperties
    Simple value object to hold image properties for DvdWriter.
Variables
      MEDIA_DVDPLUSR = 1
    Constant representing DVD+R media.
      MEDIA_DVDPLUSRW = 2
    Constant representing DVD+RW media.
      logger = logging.getLogger("CedarBackup2.log.writers.dvdwriter")
      GROWISOFS_COMMAND = ['growisofs']
      EJECT_COMMAND = ['eject']
      __package__ = 'CedarBackup2.writers'
CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2-module.html

    Module CedarBackup2


    Variables


CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.mbox-module.html

    Module mbox


    Classes

    LocalConfig
    MboxConfig
    MboxDir
    MboxFile

    Functions

    executeAction

    Variables

    GREPMAIL_COMMAND
    REVISION_PATH_EXTENSION
    __package__
    logger

CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mbox-module.html
    Package CedarBackup2 :: Package extend :: Module mbox

    Module mbox

    source code

    Provides an extension to back up mbox email files.

    Backing up email

Email folders (often stored as mbox flatfiles) are not well-suited to being backed up with an incremental backup like the one offered by Cedar Backup. This is because mbox files often change on a daily basis, forcing the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large folders. (Note that the alternative maildir format does not share this problem, since it typically uses one file per message.)

    One solution to this problem is to design a smarter incremental backup process, which backs up baseline content on the first day of the week, and then backs up only new messages added to that folder on every other day of the week. This way, the backup for any single day is only as large as the messages placed into the folder on that day. The backup isn't as "perfect" as the incremental backup process, because it doesn't preserve information about messages deleted from the backed-up folder. However, it should be much more space-efficient, and in a recovery situation, it seems better to restore too much data rather than too little.
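The week-based scheme described above can be sketched in a few lines. This is a hypothetical helper, assuming the baseline day is expressed as a weekday() index and that the previously saved revision date is passed in; the real extension delegates the actual message selection to the grepmail utility.

```python
import datetime

def messages_to_back_up(today, start_of_week, last_revision):
   """Return a (full, since) tuple describing what to back up.

   full is True when a complete baseline backup is due (start of the
   week, or no previous revision exists); otherwise 'since' is the
   cutoff date and only messages newer than it need to be backed up.
   """
   if today.weekday() == start_of_week or last_revision is None:
      return (True, None)        # baseline: back up the whole folder
   return (False, last_revision) # incremental: only newer messages
```

On any single day other than the baseline day, the backup is only as large as the messages added since the last revision, which is where the space savings come from.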

    What is this extension?

    This is a Cedar Backup extension used to back up mbox email files via the Cedar Backup command line. Individual mbox files or directories containing mbox files can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental. It implements the "smart" incremental backup process discussed above, using functionality provided by the grepmail utility.

    This extension requires a new configuration section <mbox> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    The mbox action is conceptually similar to the standard collect action, except that mbox directories are not collected recursively. This implies some configuration changes (i.e. there's no need for global exclusions or an ignore file). If you back up a directory, all of the mbox files in that directory are backed up into a single tar file using the indicated compression method.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
  MboxFile
Class representing mbox file configuration.
  MboxDir
Class representing mbox directory configuration.
  MboxConfig
Class representing mbox configuration.
  LocalConfig
Class representing this extension's configuration document.
Functions
     
    executeAction(configPath, options, config)
    Executes the mbox backup action.
    source code
     
    _getCollectMode(local, item)
    Gets the collect mode that should be used for an mbox file or directory.
    source code
     
    _getCompressMode(local, item)
    Gets the compress mode that should be used for an mbox file or directory.
    source code
     
    _getRevisionPath(config, item)
    Gets the path to the revision file associated with a repository.
    source code
     
    _loadLastRevision(config, item, fullBackup, collectMode)
    Loads the last revision date for this item from disk and returns it.
    source code
     
    _writeNewRevision(config, item, newRevision)
    Writes new revision information to disk.
    source code
     
    _getExclusions(mboxDir)
    Gets exclusions (file and patterns) associated with an mbox directory.
    source code
     
    _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None)
    Gets the backup file path (including correct extension) associated with an mbox path.
    source code
     
    _getTarfilePath(config, mboxPath, compressMode, newRevision)
    Gets the tarfile backup file path (including correct extension) associated with an mbox path.
    source code
     
    _getOutputFile(backupPath, compressMode)
    Opens the output file used for saving backup information.
    source code
     
    _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None)
    Backs up an individual mbox file.
    source code
     
    _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns)
    Backs up a directory containing mbox files.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.extend.mbox")
      GREPMAIL_COMMAND = ['grepmail']
      REVISION_PATH_EXTENSION = 'mboxlast'
      __package__ = 'CedarBackup2.extend'
Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the mbox backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _getCollectMode(local, item)

    source code 

    Gets the collect mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section.

    Parameters:
    • local - LocalConfig object.
    • item - Mbox file or directory
    Returns:
    Collect mode to use.

    _getCompressMode(local, item)

    source code 

    Gets the compress mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section.

    Parameters:
    • local - LocalConfig object.
    • item - Mbox file or directory
    Returns:
    Compress mode to use.

    _getRevisionPath(config, item)

    source code 

    Gets the path to the revision file associated with a repository.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    Returns:
    Absolute path to the revision file associated with the repository.

    _loadLastRevision(config, item, fullBackup, collectMode)

    source code 

    Loads the last revision date for this item from disk and returns it.

    If this is a full backup, or if the revision file cannot be loaded for some reason, then None is returned. This indicates that there is no previous revision, so the entire mail file or directory should be backed up.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    • fullBackup - Indicates whether this is a full backup
    • collectMode - Indicates the collect mode for this item
    Returns:
    Revision date as a datetime.datetime object or None.

    Note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write.

    _writeNewRevision(config, item, newRevision)

    source code 

    Writes new revision information to disk.

    If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    • newRevision - Revision date as a datetime.datetime object.

    Note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write.

    _getExclusions(mboxDir)

    source code 

    Gets exclusions (file and patterns) associated with an mbox directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the mbox directory's relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the mbox directory's list of patterns.

    Parameters:
    • mboxDir - Mbox directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.

    _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None)

    source code 

    Gets the backup file path (including correct extension) associated with an mbox path.

    We assume that if the target directory is passed in, that we're backing up a directory. Under these circumstances, we'll just use the basename of the individual path as the output file.

    Parameters:
    • config - Cedar Backup configuration.
    • mboxPath - Path to the indicated mbox file or directory
    • compressMode - Compress mode to use for this mbox path
    • newRevision - Revision this backup path represents
    • targetDir - Target directory in which the path should exist
    Returns:
    Absolute path to the backup file associated with the repository.

    Note: The backup path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object.

    _getTarfilePath(config, mboxPath, compressMode, newRevision)

    source code 

    Gets the tarfile backup file path (including correct extension) associated with an mbox path.

    Along with the path, the tar archive mode is returned in a form that can be used with BackupFileList.generateTarfile.

    Parameters:
    • config - Cedar Backup configuration.
    • mboxPath - Path to the indicated mbox file or directory
    • compressMode - Compress mode to use for this mbox path
    • newRevision - Revision this backup path represents
    Returns:
    Tuple of (absolute path to tarfile, tar archive mode)

    Note: The tarfile path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object.

    _getOutputFile(backupPath, compressMode)

    source code 

    Opens the output file used for saving backup information.

    If the compress mode is "gzip", we'll open a GzipFile, and if the compress mode is "bzip2", we'll open a BZ2File. Otherwise, we'll just return an object from the normal open() method.

    Parameters:
    • backupPath - Path to file to open.
    • compressMode - Compress mode of file ("none", "gzip", "bzip").
    Returns:
    Output file object.

    _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None)

    source code 

    Backs up an individual mbox file.

    Parameters:
    • config - Cedar Backup configuration.
    • absolutePath - Path to mbox file to back up.
    • fullBackup - Indicates whether this should be a full backup.
    • collectMode - Indicates the collect mode for this item
    • compressMode - Compress mode of file ("none", "gzip", "bzip")
    • lastRevision - Date of last backup as datetime.datetime
    • newRevision - Date of new (current) backup as datetime.datetime
    • targetDir - Target directory to write the backed-up file into
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem backing up the mbox file.

    _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns)

    source code 

    Backs up a directory containing mbox files.

    Parameters:
    • config - Cedar Backup configuration.
    • absolutePath - Path to mbox directory to back up.
    • fullBackup - Indicates whether this should be a full backup.
    • collectMode - Indicates the collect mode for this item
    • compressMode - Compress mode of file ("none", "gzip", "bzip")
    • lastRevision - Date of last backup as datetime.datetime
    • newRevision - Date of new (current) backup as datetime.datetime
    • excludePaths - List of absolute paths to exclude.
    • excludePatterns - List of patterns to exclude.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem backing up the mbox file.

CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.PurgeDir-class.html
    Package CedarBackup2 :: Module config :: Class PurgeDir

    Class PurgeDir

    source code

    object --+
             |
            PurgeDir
    

    Class representing a Cedar Backup purge directory.

    The following restrictions exist on data in this class:

    • The absolute path must be an absolute path
    • The retain days value must be an integer >= 0.
Instance Methods
     
    __init__(self, absolutePath=None, retainDays=None)
    Constructor for the PurgeDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setRetainDays(self, value)
    Property target used to set the retain days value.
    source code
     
    _getRetainDays(self)
Property target used to get the retain days value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      absolutePath
    Absolute path of directory to purge.
      retainDays
    Number of days content within directory should be retained.

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, retainDays=None)
    (Constructor)

    source code 

    Constructor for the PurgeDir class.

    Parameters:
    • absolutePath - Absolute path of the directory to be purged.
    • retainDays - Number of days content within directory should be retained.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRetainDays(self, value)

    source code 

    Property target used to set the retain days value. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    absolutePath

    Absolute path of directory to purge.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    retainDays

    Number of days content within directory should be retained.

    Get Method:
_getRetainDays(self) - Property target used to get the retain days value.
    Set Method:
    _setRetainDays(self, value) - Property target used to set the retain days value.

CedarBackup2-2.22.0/doc/interface/CedarBackup2-pysrc.html
    Package CedarBackup2

    Source Code for Package CedarBackup2

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python (>= 2.5)
# Project  : Cedar Backup, release 2
# Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $
# Purpose  : Provides package initialization
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #


########################################################################
# Module documentation
########################################################################

"""
Implements local and remote backups to CD or DVD media.

Cedar Backup is a software package designed to manage system backups for a pool
of local and remote machines.  Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories.  It can also be easily extended to support other kinds of data
sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc, with
the expectation that the disc will be changed or overwritten at the beginning
of each week.  If your hardware is new enough, Cedar Backup can write
multisession discs, allowing you to add incremental data to a disc on a daily
basis.

Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python programming language.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""


########################################################################
# Package initialization
########################################################################

# Using 'from CedarBackup2 import *' will just import the modules listed
# in the __all__ variable.

__all__ = [ 'actions', 'cli', 'config', 'extend', 'filesystem', 'knapsack',
            'peer', 'release', 'tools', 'util', 'writers', ]
    

CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.mysql.LocalConfig-class.html
    Package CedarBackup2 :: Package extend :: Module mysql :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit MySQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <mysql> configuration section as the next child of a parent.
    source code
     
    _setMysql(self, value)
    Property target used to set the mysql configuration value.
    source code
     
    _getMysql(self)
    Property target used to get the mysql configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseMysql(parentNode)
    Parses a mysql configuration section.
    source code
Properties
      mysql
    Mysql configuration in terms of a MysqlConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    The compress mode must be filled in. Then, if the 'all' flag is set, no databases are allowed, and if the 'all' flag is not set, at least one database is required.

    Raises:
    • ValueError - If one of the validations fails.
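    The three rules above can be sketched as a small helper. This is hypothetical code, not the library's implementation; the function name and signature are invented:

    ```python
    def validate_mysql(compressMode, allFlag, databases):
       """Sketch of the documented validation rules for the mysql section."""
       if compressMode is None:
          raise ValueError("Compress mode must be filled in.")
       if allFlag and databases:
          raise ValueError("No databases are allowed when the 'all' flag is set.")
       if not allFlag and not databases:
          raise ValueError("At least one database is required when 'all' is not set.")
    ```
    
    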

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <mysql> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      user           //cb_config/mysql/user
      password       //cb_config/mysql/password
      compressMode   //cb_config/mysql/compress_mode
      all            //cb_config/mysql/all
    

    We also add groups of the following items, one list element per item:

      database       //cb_config/mysql/database
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
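    As an illustration, a written <mysql> section sits in the document like the following fragment. The element values here are invented for the example; only the element names and paths come from the field list above:

    ```
    <cb_config>
       <mysql>
          <user>backup</user>
          <password>secret</password>
          <compress_mode>gzip</compress_mode>
          <all>N</all>
          <database>mydb1</database>
          <database>mydb2</database>
       </mysql>
    </cb_config>
    ```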

    _setMysql(self, value)

    source code 

    Property target used to set the mysql configuration value. If not None, the value must be a MysqlConfig object.

    Raises:
    • ValueError - If the value is not a MysqlConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the mysql configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseMysql(parentNode)
    Static Method

    source code 

    Parses a mysql configuration section.

    We read the following fields:

      user           //cb_config/mysql/user
      password       //cb_config/mysql/password
      compressMode   //cb_config/mysql/compress_mode
      all            //cb_config/mysql/all
    

    We also read groups of the following item, one list element per item:

      databases      //cb_config/mysql/database
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    MysqlConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.
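    For illustration, the same fields can be read with the standard library's minidom. This is a sketch only; the real _parseMysql uses Cedar Backup's own DOM utilities and returns a MysqlConfig object rather than a dictionary:

    ```python
    import xml.dom.minidom

    def parse_mysql_section(xmlData):
       """Sketch of reading the documented //cb_config/mysql fields."""
       dom = xml.dom.minidom.parseString(xmlData)
       sections = dom.getElementsByTagName("mysql")
       if not sections:
          return None  # section does not exist
       section = sections[0]
       def text(tag):
          nodes = section.getElementsByTagName(tag)
          if not nodes or nodes[0].firstChild is None:
             return None
          return nodes[0].firstChild.data.strip()
       return {
          "user": text("user"),
          "password": text("password"),
          "compressMode": text("compress_mode"),
          "all": text("all"),
          "databases": [node.firstChild.data.strip()
                        for node in section.getElementsByTagName("database")
                        if node.firstChild is not None],
       }
    ```
    
    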

    Property Details

    mysql

    Mysql configuration in terms of a MysqlConfig object.

    Get Method:
    _getMysql(self) - Property target used to get the mysql configuration value.
    Set Method:
    _setMysql(self, value) - Property target used to set the mysql configuration value.

    CedarBackup2.actions.collect

    Source Code for Module CedarBackup2.actions.collect

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2008,2011 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: collect.py 1020 2011-10-11 21:47:53Z pronovic $ 
     31  # Purpose  : Implements the standard 'collect' action. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Implements the standard 'collect' action. 
     41  @sort: executeCollect 
     42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import os 
     52  import logging 
     53  import pickle 
     54   
     55  # Cedar Backup modules 
     56  from CedarBackup2.filesystem import BackupFileList, FilesystemList 
     57  from CedarBackup2.util import isStartOfWeek, changeOwnership, displayBytes, buildNormalizedPath 
     58  from CedarBackup2.actions.constants import DIGEST_EXTENSION, COLLECT_INDICATOR 
     59  from CedarBackup2.actions.util import writeIndicatorFile 
     60   
     61   
     62  ######################################################################## 
     63  # Module-wide constants and variables 
     64  ######################################################################## 
     65   
     66  logger = logging.getLogger("CedarBackup2.log.actions.collect") 
     67   
     68   
     69  ######################################################################## 
     70  # Public functions 
     71  ######################################################################## 
     72   
     73  ############################ 
     74  # executeCollect() function 
     75  ############################ 
     76   
    
      77  def executeCollect(configPath, options, config):
      78     """
      79     Executes the collect backup action.
      80  
      81     @note: When the collect action is complete, we will write a collect
      82     indicator to the collect directory, so it's obvious that the collect action
      83     has completed.  The stage process uses this indicator to decide whether a
      84     peer is ready to be staged.
      85  
      86     @param configPath: Path to configuration file on disk.
      87     @type configPath: String representing a path on disk.
      88  
      89     @param options: Program command-line options.
      90     @type options: Options object.
      91  
      92     @param config: Program configuration.
      93     @type config: Config object.
      94  
      95     @raise ValueError: Under many generic error conditions
      96     @raise TarError: If there is a problem creating a tar file
      97     """
      98     logger.debug("Executing the 'collect' action.")
      99     if config.options is None or config.collect is None:
     100        raise ValueError("Collect configuration is not properly filled in.")
     101     if ((config.collect.collectFiles is None or len(config.collect.collectFiles) < 1) and
     102         (config.collect.collectDirs is None or len(config.collect.collectDirs) < 1)):
     103        raise ValueError("There must be at least one collect file or collect directory.")
     104     fullBackup = options.full
     105     logger.debug("Full backup flag is [%s]" % fullBackup)
     106     todayIsStart = isStartOfWeek(config.options.startingDay)
     107     resetDigest = fullBackup or todayIsStart
     108     logger.debug("Reset digest flag is [%s]" % resetDigest)
     109     if config.collect.collectFiles is not None:
     110        for collectFile in config.collect.collectFiles:
     111           logger.debug("Working with collect file [%s]" % collectFile.absolutePath)
     112           collectMode = _getCollectMode(config, collectFile)
     113           archiveMode = _getArchiveMode(config, collectFile)
     114           digestPath = _getDigestPath(config, collectFile.absolutePath)
     115           tarfilePath = _getTarfilePath(config, collectFile.absolutePath, archiveMode)
     116           if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
     117              logger.debug("File meets criteria to be backed up today.")
     118              _collectFile(config, collectFile.absolutePath, tarfilePath,
     119                           collectMode, archiveMode, resetDigest, digestPath)
     120           else:
     121              logger.debug("File will not be backed up, per collect mode.")
     122           logger.info("Completed collecting file [%s]" % collectFile.absolutePath)
     123     if config.collect.collectDirs is not None:
     124        for collectDir in config.collect.collectDirs:
     125           logger.debug("Working with collect directory [%s]" % collectDir.absolutePath)
     126           collectMode = _getCollectMode(config, collectDir)
     127           archiveMode = _getArchiveMode(config, collectDir)
     128           ignoreFile = _getIgnoreFile(config, collectDir)
     129           linkDepth = _getLinkDepth(collectDir)
     130           dereference = _getDereference(collectDir)
     131           recursionLevel = _getRecursionLevel(collectDir)
     132           (excludePaths, excludePatterns) = _getExclusions(config, collectDir)
     133           if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
     134              logger.debug("Directory meets criteria to be backed up today.")
     135              _collectDirectory(config, collectDir.absolutePath,
     136                                collectMode, archiveMode, ignoreFile, linkDepth, dereference,
     137                                resetDigest, excludePaths, excludePatterns, recursionLevel)
     138           else:
     139              logger.debug("Directory will not be backed up, per collect mode.")
     140           logger.info("Completed collecting directory [%s]" % collectDir.absolutePath)
     141     writeIndicatorFile(config.collect.targetDir, COLLECT_INDICATOR,
     142                        config.options.backupUser, config.options.backupGroup)
     143     logger.info("Executed the 'collect' action successfully.")
     144  
     145  
     146  ########################################################################
     147  # Private utility functions
     148  ########################################################################
     149  
     150  ##########################
     151  # _collectFile() function
     152  ##########################
     153  
     154  def _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath):
     155     """
     156     Collects a configured collect file.
     157  
     158     The indicated collect file is collected into the indicated tarfile.
     159     For files that are collected incrementally, we'll use the indicated
     160     digest path and pay attention to the reset digest flag (basically, the reset
     161     digest flag ignores any existing digest, but a new digest is always
     162     rewritten).
     163  
     164     The caller must decide what the collect and archive modes are, since they
     165     can be on both the collect configuration and the collect file itself.
     166  
     167     @param config: Config object.
     168     @param absolutePath: Absolute path of file to collect.
     169     @param tarfilePath: Path to tarfile that should be created.
     170     @param collectMode: Collect mode to use.
     171     @param archiveMode: Archive mode to use.
     172     @param resetDigest: Reset digest flag.
     173     @param digestPath: Path to digest file on disk, if needed.
     174     """
     175     backupList = BackupFileList()
     176     backupList.addFile(absolutePath)
     177     _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
     178  
     179  
     180  ###############################
     181  # _collectDirectory() function
     182  ###############################
     183  
     184  def _collectDirectory(config, absolutePath, collectMode, archiveMode,
     185                        ignoreFile, linkDepth, dereference, resetDigest,
     186                        excludePaths, excludePatterns, recursionLevel):
     187     """
     188     Collects a configured collect directory.
     189  
     190     The indicated collect directory is collected into the indicated tarfile.
     191     For directories that are collected incrementally, we'll use the indicated
     192     digest path and pay attention to the reset digest flag (basically, the reset
     193     digest flag ignores any existing digest, but a new digest is always
     194     rewritten).
     195  
     196     The caller must decide what the collect and archive modes are, since they
     197     can be on both the collect configuration and the collect directory itself.
     198  
     199     @param config: Config object.
     200     @param absolutePath: Absolute path of directory to collect.
     201     @param collectMode: Collect mode to use.
     202     @param archiveMode: Archive mode to use.
     203     @param ignoreFile: Ignore file to use.
     204     @param linkDepth: Link depth value to use.
     205     @param dereference: Dereference flag to use.
     206     @param resetDigest: Reset digest flag.
     207     @param excludePaths: List of absolute paths to exclude.
     208     @param excludePatterns: List of patterns to exclude.
     209     @param recursionLevel: Recursion level (zero for no recursion)
     210     """
     211     if recursionLevel == 0:
     212        # Collect the actual directory because we're at recursion level 0
     213        logger.info("Collecting directory [%s]" % absolutePath)
     214        tarfilePath = _getTarfilePath(config, absolutePath, archiveMode)
     215        digestPath = _getDigestPath(config, absolutePath)
     216  
     217        backupList = BackupFileList()
     218        backupList.ignoreFile = ignoreFile
     219        backupList.excludePaths = excludePaths
     220        backupList.excludePatterns = excludePatterns
     221        backupList.addDirContents(absolutePath, linkDepth=linkDepth, dereference=dereference)
     222  
     223        _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
     224     else:
     225        # Find all of the immediate subdirectories
     226        subdirs = FilesystemList()
     227        subdirs.excludeFiles = True
     228        subdirs.excludeLinks = True
     229        subdirs.excludePaths = excludePaths
     230        subdirs.excludePatterns = excludePatterns
     231        subdirs.addDirContents(path=absolutePath, recursive=False, addSelf=False)
     232  
     233        # Back up the subdirectories separately
     234        for subdir in subdirs:
     235           _collectDirectory(config, subdir, collectMode, archiveMode,
     236                             ignoreFile, linkDepth, dereference, resetDigest,
     237                             excludePaths, excludePatterns, recursionLevel-1)
     238           excludePaths.append(subdir)  # this directory is already backed up, so exclude it
     239  
     240        # Back up everything that hasn't previously been backed up
     241        _collectDirectory(config, absolutePath, collectMode, archiveMode,
     242                          ignoreFile, linkDepth, dereference, resetDigest,
     243                          excludePaths, excludePatterns, 0)
     244  
     245  
     246  ############################
     247  # _executeBackup() function
     248  ############################
     249  
     250  def _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath):
     251     """
     252     Execute the backup process for the indicated backup list.
     253  
     254     This function exists mainly to consolidate functionality between the
     255     L{_collectFile} and L{_collectDirectory} functions.  Those functions build
     256     the backup list; this function causes the backup to execute properly and
     257     also manages usage of the digest file on disk as explained in their
     258     comments.
     259  
     260     For collect files, the digest file will always just contain the single file
     261     that is being backed up.  This might be a little wasteful in terms of the
     262     number of files that we keep around, but it's consistent and easy to understand.
     263  
     264     @param config: Config object.
     265     @param backupList: List to execute backup for
     266     @param absolutePath: Absolute path of directory or file to collect.
     267     @param tarfilePath: Path to tarfile that should be created.
     268     @param collectMode: Collect mode to use.
     269     @param archiveMode: Archive mode to use.
     270     @param resetDigest: Reset digest flag.
     271     @param digestPath: Path to digest file on disk, if needed.
     272     """
     273     if collectMode != 'incr':
     274        logger.debug("Collect mode is [%s]; no digest will be used." % collectMode)
     275        if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
     276           logger.info("Backing up file [%s] (%s)." % (absolutePath, displayBytes(backupList.totalSize())))
     277        else:
     278           logger.info("Backing up %d files in [%s] (%s)." % (len(backupList), absolutePath, displayBytes(backupList.totalSize())))
     279        if len(backupList) > 0:
     280           backupList.generateTarfile(tarfilePath, archiveMode, True)
     281           changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
     282     else:
     283        if resetDigest:
     284           logger.debug("Based on resetDigest flag, digest will be cleared.")
     285           oldDigest = {}
     286        else:
     287           logger.debug("Based on resetDigest flag, digest will be loaded from disk.")
     288           oldDigest = _loadDigest(digestPath)
     289        (removed, newDigest) = backupList.removeUnchanged(oldDigest, captureDigest=True)
     290        logger.debug("Removed %d unchanged files based on digest values." % removed)
     291        if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
     292           logger.info("Backing up file [%s] (%s)." % (absolutePath, displayBytes(backupList.totalSize())))
     293        else:
     294           logger.info("Backing up %d files in [%s] (%s)." % (len(backupList), absolutePath, displayBytes(backupList.totalSize())))
     295        if len(backupList) > 0:
     296           backupList.generateTarfile(tarfilePath, archiveMode, True)
     297           changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
     298        _writeDigest(config, newDigest, digestPath)
     299  
     300  
     301  #########################
     302  # _loadDigest() function
     303  #########################
     304  
     305  def _loadDigest(digestPath):
     306     """
     307     Loads the indicated digest path from disk into a dictionary.
     308  
     309     If we can't load the digest successfully (either because it doesn't exist or
     310     for some other reason), then an empty dictionary will be returned - but the
     311     condition will be logged.
     312  
     313     @param digestPath: Path to the digest file on disk.
     314  
     315     @return: Dictionary representing contents of digest path.
     316     """
     317     if not os.path.isfile(digestPath):
     318        digest = {}
     319        logger.debug("Digest [%s] does not exist on disk." % digestPath)
     320     else:
     321        try:
     322           digest = pickle.load(open(digestPath, "r"))
     323           logger.debug("Loaded digest [%s] from disk: %d entries." % (digestPath, len(digest)))
     324        except:
     325           digest = {}
     326           logger.error("Failed loading digest [%s] from disk." % digestPath)
     327     return digest
     328  
     329  
     330  ##########################
     331  # _writeDigest() function
     332  ##########################
     333  
     334  def _writeDigest(config, digest, digestPath):
     335     """
     336     Writes the digest dictionary to the indicated digest path on disk.
     337  
     338     If we can't write the digest successfully for any reason, we'll log the
     339     condition but won't throw an exception.
     340  
     341     @param config: Config object.
     342     @param digest: Digest dictionary to write to disk.
     343     @param digestPath: Path to the digest file on disk.
     344     """
     345     try:
     346        pickle.dump(digest, open(digestPath, "w"))
     347        changeOwnership(digestPath, config.options.backupUser, config.options.backupGroup)
     348        logger.debug("Wrote new digest [%s] to disk: %d entries." % (digestPath, len(digest)))
     349     except:
     350        logger.error("Failed to write digest [%s] to disk." % digestPath)
     351  
     352  
     353  ########################################################################
     354  # Private attribute "getter" functions
     355  ########################################################################
     356  
     357  ############################
     358  # getCollectMode() function
     359  ############################
     360  
     361  def _getCollectMode(config, item):
     362     """
     363     Gets the collect mode that should be used for a collect directory or file.
     364     If possible, use the one on the file or directory, otherwise take from collect section.
     365     @param config: Config object.
     366     @param item: C{CollectFile} or C{CollectDir} object
     367     @return: Collect mode to use.
     368     """
     369     if item.collectMode is None:
     370        collectMode = config.collect.collectMode
     371     else:
     372        collectMode = item.collectMode
     373     logger.debug("Collect mode is [%s]" % collectMode)
     374     return collectMode
     375  
     376  
     377  #############################
     378  # _getArchiveMode() function
     379  #############################
     380  
     381  def _getArchiveMode(config, item):
     382     """
     383     Gets the archive mode that should be used for a collect directory or file.
     384     If possible, use the one on the file or directory, otherwise take from collect section.
     385     @param config: Config object.
     386     @param item: C{CollectFile} or C{CollectDir} object
     387     @return: Archive mode to use.
     388     """
     389     if item.archiveMode is None:
     390        archiveMode = config.collect.archiveMode
     391     else:
     392        archiveMode = item.archiveMode
     393     logger.debug("Archive mode is [%s]" % archiveMode)
     394     return archiveMode
     395  
     396  
     397  ############################
     398  # _getIgnoreFile() function
     399  ############################
     400  
     401  def _getIgnoreFile(config, item):
     402     """
     403     Gets the ignore file that should be used for a collect directory or file.
     404     If possible, use the one on the file or directory, otherwise take from collect section.
     405     @param config: Config object.
     406     @param item: C{CollectFile} or C{CollectDir} object
     407     @return: Ignore file to use.
     408     """
     409     if item.ignoreFile is None:
     410        ignoreFile = config.collect.ignoreFile
     411     else:
     412        ignoreFile = item.ignoreFile
     413     logger.debug("Ignore file is [%s]" % ignoreFile)
     414     return ignoreFile
     415  
     416  
     417  ############################
     418  # _getLinkDepth() function
     419  ############################
     420  
     421  def _getLinkDepth(item):
     422     """
     423     Gets the link depth that should be used for a collect directory.
     424     If possible, use the one on the directory, otherwise set a value of 0 (zero).
     425     @param item: C{CollectDir} object
     426     @return: Link depth to use.
     427     """
     428     if item.linkDepth is None:
     429        linkDepth = 0
     430     else:
     431        linkDepth = item.linkDepth
     432     logger.debug("Link depth is [%d]" % linkDepth)
     433     return linkDepth
     434  
     435  
     436  #############################
     437  # _getDereference() function
     438  #############################
     439  
     440  def _getDereference(item):
     441     """
     442     Gets the dereference flag that should be used for a collect directory.
     443     If possible, use the one on the directory, otherwise set a value of False.
     444     @param item: C{CollectDir} object
     445     @return: Dereference flag to use.
     446     """
     447     if item.dereference is None:
     448        dereference = False
     449     else:
     450        dereference = item.dereference
     451     logger.debug("Dereference flag is [%s]" % dereference)
     452     return dereference
     453  
     454  
     455  ################################
     456  # _getRecursionLevel() function
     457  ################################
     458  
     459  def _getRecursionLevel(item):
     460     """
     461     Gets the recursion level that should be used for a collect directory.
     462     If possible, use the one on the directory, otherwise set a value of 0 (zero).
     463     @param item: C{CollectDir} object
     464     @return: Recursion level to use.
     465     """
     466     if item.recursionLevel is None:
     467        recursionLevel = 0
     468     else:
     469        recursionLevel = item.recursionLevel
     470     logger.debug("Recursion level is [%d]" % recursionLevel)
     471     return recursionLevel
     472  
     473  
     474  ############################
     475  # _getDigestPath() function
     476  ############################
     477  
     478  def _getDigestPath(config, absolutePath):
     479     """
     480     Gets the digest path associated with a collect directory or file.
     481     @param config: Config object.
     482     @param absolutePath: Absolute path to generate digest for
     483     @return: Absolute path to the digest associated with the collect directory or file.
     484     """
     485     normalized = buildNormalizedPath(absolutePath)
     486     filename = "%s.%s" % (normalized, DIGEST_EXTENSION)
     487     digestPath = os.path.join(config.options.workingDir, filename)
     488     logger.debug("Digest path is [%s]" % digestPath)
     489     return digestPath
     490  
     491  
     492  #############################
     493  # _getTarfilePath() function
     494  #############################
     495  
     496  def _getTarfilePath(config, absolutePath, archiveMode):
     497     """
     498     Gets the tarfile path (including correct extension) associated with a collect directory.
     499     @param config: Config object.
     500     @param absolutePath: Absolute path to generate tarfile for
     501     @param archiveMode: Archive mode to use for this tarfile.
     502     @return: Absolute path to the tarfile associated with the collect directory.
     503     """
     504     if archiveMode == 'tar':
     505        extension = "tar"
     506     elif archiveMode == 'targz':
     507        extension = "tar.gz"
     508     elif archiveMode == 'tarbz2':
     509        extension = "tar.bz2"
     510     normalized = buildNormalizedPath(absolutePath)
     511     filename = "%s.%s" % (normalized, extension)
     512     tarfilePath = os.path.join(config.collect.targetDir, filename)
     513     logger.debug("Tarfile path is [%s]" % tarfilePath)
     514     return tarfilePath
     515  
     516  
     517  ############################
     518  # _getExclusions() function
     519  ############################
     520  
     521  def _getExclusions(config, collectDir):
     522     """
     523     Gets exclusions (file and patterns) associated with a collect directory.
     524  
     525     The returned files value is a list of absolute paths to be excluded from the
     526     backup for a given directory.  It is derived from the collect configuration
     527     absolute exclude paths and the collect directory's absolute and relative
     528     exclude paths.
     529  
     530     The returned patterns value is a list of patterns to be excluded from the
     531     backup for a given directory.  It is derived from the list of patterns from
     532     the collect configuration and from the collect directory itself.
     533  
     534     @param config: Config object.
     535     @param collectDir: Collect directory object.
     536  
     537     @return: Tuple (files, patterns) indicating what to exclude.
     538     """
     539     paths = []
     540     if config.collect.absoluteExcludePaths is not None:
     541        paths.extend(config.collect.absoluteExcludePaths)
     542     if collectDir.absoluteExcludePaths is not None:
     543        paths.extend(collectDir.absoluteExcludePaths)
     544     if collectDir.relativeExcludePaths is not None:
     545        for relativePath in collectDir.relativeExcludePaths:
     546           paths.append(os.path.join(collectDir.absolutePath, relativePath))
     547     patterns = []
     548     if config.collect.excludePatterns is not None:
     549        patterns.extend(config.collect.excludePatterns)
     550     if collectDir.excludePatterns is not None:
     551        patterns.extend(collectDir.excludePatterns)
     552     logger.debug("Exclude paths: %s" % paths)
     553     logger.debug("Exclude patterns: %s" % patterns)
     554     return (paths, patterns)
     555  
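    The incremental logic in _executeBackup above boils down to: load the previous digest, drop files whose digest is unchanged, and always rewrite the digest afterward. A self-contained sketch of that idea (the function name and data shapes are invented for illustration; sha1 stands in for the backup list's digest mechanism):

    ```python
    import hashlib

    def remove_unchanged(files, contents, old_digest):
       """Return (files_to_back_up, new_digest) given a {path: bytes} map of
       file contents and the {path: hexdigest} map from the previous run."""
       new_digest = {}
       changed = []
       for path in files:
          digest = hashlib.sha1(contents[path]).hexdigest()
          new_digest[path] = digest  # new digest is always rewritten
          if old_digest.get(path) != digest:
             changed.append(path)   # only changed/new files get backed up
       return changed, new_digest
    ```

    Passing an empty `old_digest` reproduces the resetDigest behavior: every file appears changed and is backed up in full.
    
    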

    CedarBackup2.extend.subversion

    Module subversion

    source code

    Provides an extension to back up Subversion repositories.

    This is a Cedar Backup extension used to back up Subversion repositories via the Cedar Backup command line. Each Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental.

    This extension requires a new configuration section <subversion> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). Although the repository type can be specified in configuration, that information is just kept around for reference. It doesn't affect the backup. Both kinds of repositories are backed up in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do that, then use the normal collect action. This is probably simpler, although it carries its own advantages and disadvantages (plus you will have to be careful to exclude the working directories Subversion uses when building an update to commit). Check the Subversion documentation for more information.
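    As an illustration of the incremental svnadmin dump approach described above, a command of roughly this shape might be constructed. This is a sketch only; the extension assembles its own command internally, and while `--revision`, `--incremental`, and `--quiet` are standard svnadmin options, the exact combination here is an assumption:

    ```python
    def build_dump_command(repositoryPath, startRevision=None, endRevision=None):
       """Build an illustrative 'svnadmin dump' argument list."""
       args = ["svnadmin", "dump", "--quiet"]
       if startRevision is not None and endRevision is not None:
          # Dump only the revision range that is new since the last backup
          args.extend(["--revision", "%d:%d" % (startRevision, endRevision),
                       "--incremental"])
       args.append(repositoryPath)
       return args
    ```

    With no revision range, the sketch falls back to a full dump, which matches the weekly/full-backup case.
    
    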


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      RepositoryDir
    Class representing Subversion repository directory.
      Repository
    Class representing generic Subversion repository configuration.
      SubversionConfig
    Class representing Subversion configuration.
      LocalConfig
    Class representing this extension's configuration document.
      BDBRepository
    Class representing Subversion BDB (Berkeley Database) repository configuration.
      FSFSRepository
    Class representing Subversion FSFS repository configuration.
    Functions
     
    executeAction(configPath, options, config)
    Executes the Subversion backup action.
    source code
     
    _getCollectMode(local, repository)
    Gets the collect mode that should be used for a repository.
    source code
     
    _getCompressMode(local, repository)
    Gets the compress mode that should be used for a repository.
    source code
     
    _getRevisionPath(config, repository)
    Gets the path to the revision file associated with a repository.
    source code
     
    _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision)
    Gets the backup file path (including correct extension) associated with a repository.
    source code
     
    _getRepositoryPaths(repositoryDir)
    Gets a list of child repository paths within a repository directory.
    source code
     
    _getExclusions(repositoryDir)
    Gets exclusions (files and patterns) associated with a repository directory.
    source code
     
    _backupRepository(config, local, todayIsStart, fullBackup, repository)
    Backs up an individual Subversion repository.
    source code
     
    _getOutputFile(backupPath, compressMode)
    Opens the output file used for saving the Subversion dump.
    source code
     
    _loadLastRevision(revisionPath)
    Loads the indicated revision file from disk into an integer.
    source code
     
    _writeLastRevision(config, revisionPath, endRevision)
    Writes the end revision to the indicated revision file on disk.
    source code
     
    backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion repository.
    source code
     
    getYoungestRevision(repositoryPath)
    Gets the youngest (newest) revision in a Subversion repository using svnlook.
    source code
     
    backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion BDB repository.
    source code
     
    backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion FSFS repository.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.extend.subversion")
      SVNLOOK_COMMAND = ['svnlook']
      SVNADMIN_COMMAND = ['svnadmin']
      REVISION_PATH_EXTENSION = 'svnlast'
      __package__ = 'CedarBackup2.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the Subversion backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _getCollectMode(local, repository)

    source code 

    Gets the collect mode that should be used for a repository. The repository's own value is used if set; otherwise the value is taken from the subversion section.

    Parameters:
    • local - LocalConfig object.
    • repository - Repository object.
    Returns:
    Collect mode to use.

    _getCompressMode(local, repository)

    source code 

    Gets the compress mode that should be used for a repository. The repository's own value is used if set; otherwise the value is taken from the subversion section.

    Parameters:
    • local - LocalConfig object.
    • repository - Repository object.
    Returns:
    Compress mode to use.

    _getRevisionPath(config, repository)

    source code 

    Gets the path to the revision file associated with a repository.

    Parameters:
    • config - Config object.
    • repository - Repository object.
    Returns:
    Absolute path to the revision file associated with the repository.

    _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision)

    source code 

    Gets the backup file path (including correct extension) associated with a repository.

    Parameters:
    • config - Config object.
    • repositoryPath - Path to the indicated repository
    • compressMode - Compress mode to use for this repository.
    • startRevision - Starting repository revision.
    • endRevision - Ending repository revision.
    Returns:
    Absolute path to the backup file associated with the repository.

    _getRepositoryPaths(repositoryDir)

    source code 

    Gets a list of child repository paths within a repository directory.

    Parameters:
    • repositoryDir - Repository directory object.
    Returns:
    List of child repository paths within the directory.

    _getExclusions(repositoryDir)

    source code 

    Gets exclusions (files and patterns) associated with a repository directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the repository directory's relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the repository directory's list of patterns.

    Parameters:
    • repositoryDir - Repository directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.

    _backupRepository(config, local, todayIsStart, fullBackup, repository)

    source code 

    Backs up an individual Subversion repository.

    This internal method wraps the public methods and adds some functionality to work better with the extended action itself.

    Parameters:
    • config - Cedar Backup configuration.
    • local - Local configuration
    • todayIsStart - Indicates whether today is start of week
    • fullBackup - Full backup flag
    • repository - Repository to operate on
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the Subversion dump.

    _getOutputFile(backupPath, compressMode)

    source code 

    Opens the output file used for saving the Subversion dump.

    If the compress mode is "gzip", we'll open a GzipFile, and if the compress mode is "bzip2", we'll open a BZ2File. Otherwise, we'll just return an object from the normal open() method.

    Parameters:
    • backupPath - Path to file to open.
    • compressMode - Compress mode of file ("none", "gzip", "bzip2").
    Returns:
    Output file object.
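    A minimal sketch of this dispatch, assuming the standard-library gzip and bz2 modules; the function name is ours, not the module's private helper:

    ```python
    import bz2
    import gzip


    def get_output_file(backup_path, compress_mode):
        # Dispatch on compress mode as described above: GzipFile for "gzip",
        # BZ2File for "bzip2", and a plain open() for anything else.
        if compress_mode == "gzip":
            return gzip.open(backup_path, "wb")
        elif compress_mode == "bzip2":
            return bz2.BZ2File(backup_path, "wb")
        return open(backup_path, "wb")
    ```

    All three branches return a file-like object supporting write(), so callers can stream the Subversion dump without caring which mode was configured.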

    _loadLastRevision(revisionPath)

    source code 

    Loads the indicated revision file from disk into an integer.

    If we can't load the revision file successfully (either because it doesn't exist or for some other reason), then a revision of -1 will be returned - but the condition will be logged. This way, we err on the side of backing up too much, because anyone using this will presumably be adding 1 to the revision, so they don't duplicate any backups.

    Parameters:
    • revisionPath - Path to the revision file on disk.
    Returns:
    Integer representing last backed-up revision, -1 on error or if none can be read.
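    The fallback behavior can be sketched as follows; this sketch assumes a plain-text integer file format, which may differ from the module's actual on-disk format:

    ```python
    def load_last_revision(revision_path):
        # Return -1 when the file is missing or unparseable, so the caller
        # errs on the side of backing up too much (it will add 1 to this
        # value to choose the starting revision).
        try:
            with open(revision_path) as f:
                return int(f.read().strip())
        except (IOError, OSError, ValueError):
            return -1
    ```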

    _writeLastRevision(config, revisionPath, endRevision)

    source code 

    Writes the end revision to the indicated revision file on disk.

    If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Config object.
    • revisionPath - Path to the revision file on disk.
    • endRevision - Last revision backed up on this run.

    backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)

    source code 

    Backs up an individual Subversion repository.

    The starting and ending revision values control an incremental backup. If the starting revision is not passed in, then revision zero (the start of the repository) is assumed. If the ending revision is not passed in, then the youngest revision in the database will be used as the endpoint.

    The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open, but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Parameters:
    • repositoryPath (String path representing Subversion repository on disk.) - Path to Subversion repository to back up
    • backupFile (Python file object as from open() or file().) - Python file object to use for writing backup.
    • startRevision (Integer value >= 0.) - Starting repository revision to back up (for incremental backups)
    • endRevision (Integer value >= 0.) - Ending repository revision to back up (for incremental backups)
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the Subversion dump.
    Notes:
    • This function should either be run as root or as the owner of the Subversion repository.
    • It is apparently not a good idea to interrupt this function. Sometimes, this leaves the repository in a "wedged" state, which requires recovery using svnadmin recover.
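    The revision-range defaulting described above can be sketched as a small helper (resolve_revisions is our name, not part of the module):

    ```python
    def resolve_revisions(youngest, startRevision=None, endRevision=None):
        # Start falls back to revision zero (the start of the repository);
        # end falls back to the youngest revision in the database.
        start = 0 if startRevision is None else startRevision
        end = youngest if endRevision is None else endRevision
        return (start, end)
    ```

    An incremental caller would typically pass startRevision as the last backed-up revision plus one, so no revision is dumped twice.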

    getYoungestRevision(repositoryPath)

    source code 

    Gets the youngest (newest) revision in a Subversion repository using svnlook.

    Parameters:
    • repositoryPath (String path representing Subversion repository on disk.) - Path to Subversion repository to look in.
    Returns:
    Youngest revision as an integer.
    Raises:
    • ValueError - If there is a problem parsing the svnlook output.
    • IOError - If there is a problem executing the svnlook command.

    Note: This function should either be run as root or as the owner of the Subversion repository.
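    A rough sketch of the svnlook invocation and parsing, with the parsing split out so it can be exercised without a repository; the function names are ours, and the real function also honors any configured command override:

    ```python
    import subprocess


    def parse_youngest(output):
        # 'svnlook youngest <repo>' is assumed to print a single integer line.
        try:
            return int(output.strip())
        except ValueError:
            raise ValueError("Unable to parse svnlook output: %r" % (output,))


    def get_youngest_revision(repository_path):
        # Assumes svnlook is on the PATH; raises IOError/OSError on failure.
        output = subprocess.check_output(["svnlook", "youngest", repository_path])
        return parse_youngest(output.decode("utf-8"))
    ```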

    backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)

    source code 

    Backs up an individual Subversion BDB repository. This function is deprecated. Use backupRepository instead.

    backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)

    source code 

    Backs up an individual Subversion FSFS repository. This function is deprecated. Use backupRepository instead.


    CedarBackup2.config.CommandOverride
    Package CedarBackup2 :: Module config :: Class CommandOverride

    Class CommandOverride

    source code

    object --+
             |
            CommandOverride
    

    Class representing a piece of Cedar Backup command override configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute

    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, command=None, absolutePath=None)
    Constructor for the CommandOverride class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setCommand(self, value)
    Property target used to set the command.
    source code
     
    _getCommand(self)
    Property target used to get the command.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      command
    Name of command to be overridden.
      absolutePath
    Absolute path of the overridden command.

    Inherited from object: __class__

    Method Details

    __init__(self, command=None, absolutePath=None)
    (Constructor)

    source code 

    Constructor for the CommandOverride class.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCommand(self, value)

    source code 

    Property target used to set the command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.
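    The path restriction can be sketched as a small validator (a hypothetical standalone function, not the actual property target):

    ```python
    import os


    def set_absolute_path(value):
        # None is allowed; anything else must be an absolute path. The path
        # does not have to exist on disk at assignment time, so only the
        # shape of the value is checked.
        if value is not None and not os.path.isabs(value):
            raise ValueError("Not an absolute path: %s" % value)
        return value
    ```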

    Property Details

    command

    Name of command to be overridden.

    Get Method:
    _getCommand(self) - Property target used to get the command.
    Set Method:
    _setCommand(self, value) - Property target used to set the command.

    absolutePath

    Absolute path of the overridden command.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    CedarBackup2.config.ActionHook
    Package CedarBackup2 :: Module config :: Class ActionHook

    Class ActionHook

    source code

    object --+
             |
            ActionHook
    
    Known Subclasses:

    Class representing a hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string matching ACTION_NAME_REGEX
    • The shell command must be a non-empty string.

    The internal before and after instance variables are always set to False in this parent class.

    Instance Methods
     
    __init__(self, action=None, command=None)
    Constructor for the ActionHook class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAction(self, value)
    Property target used to set the action name.
    source code
     
    _getAction(self)
    Property target used to get the action name.
    source code
     
    _setCommand(self, value)
    Property target used to set the command.
    source code
     
    _getCommand(self)
    Property target used to get the command.
    source code
     
    _getBefore(self)
    Property target used to get the before flag.
    source code
     
    _getAfter(self)
    Property target used to get the after flag.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      action
    Action this hook is associated with.
      command
    Shell command to execute.
      before
    Indicates whether command should be executed before action.
      after
    Indicates whether command should be executed after action.

    Inherited from object: __class__

    Method Details

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the ActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAction(self, value)

    source code 

    Property target used to set the action name. The value must be a non-empty string if it is not None. It must also consist only of lower-case letters and digits.

    Raises:
    • ValueError - If the value is an empty string.
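    The name restriction can be sketched as follows; the pattern shown is our assumption of "lower-case letters and digits only", since the module's actual ACTION_NAME_REGEX value is not reproduced here:

    ```python
    import re

    # Assumed pattern: one or more lower-case letters or digits.
    ACTION_NAME_REGEX = r"^[a-z0-9]+$"


    def set_action(value):
        # None is allowed; an empty string or any other non-matching value
        # is rejected (the empty string fails the regex as well).
        if value is not None and not re.match(ACTION_NAME_REGEX, value):
            raise ValueError("Invalid action name: %s" % value)
        return value
    ```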

    _setCommand(self, value)

    source code 

    Property target used to set the command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    Property Details

    action

    Action this hook is associated with.

    Get Method:
    _getAction(self) - Property target used to get the action name.
    Set Method:
    _setAction(self, value) - Property target used to set the action name.

    command

    Shell command to execute.

    Get Method:
    _getCommand(self) - Property target used to get the command.
    Set Method:
    _setCommand(self, value) - Property target used to set the command.

    before

    Indicates whether command should be executed before action.

    Get Method:
    _getBefore(self) - Property target used to get the before flag.

    after

    Indicates whether command should be executed after action.

    Get Method:
    _getAfter(self) - Property target used to get the after flag.

    CedarBackup2.peer.RemotePeer
    Package CedarBackup2 :: Module peer :: Class RemotePeer

    Class RemotePeer

    source code

    object --+
             |
            RemotePeer
    

    Backup peer representing a remote peer in a backup pool.

    This is a class representing a remote (networked) peer in a backup pool. Remote peers are backed up using an rcp-compatible copy command. A remote peer has associated with it a name (which must be a valid hostname), a collect directory, a working directory and a copy method (an rcp-compatible command).

    You can also set an optional local user value. This username will be used as the local user for any remote copies that are required. It can only be used if the root user is executing the backup. The root user will su to the local user and execute the remote copies as that user.

    The copy method is associated with the peer and not with the actual request to copy, because we can envision that each remote host might have a different connect method.

    The public methods other than the constructor are part of a "backup peer" interface shared with the LocalPeer class.

    Instance Methods
     
    __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None)
    Initializes a remote backup peer.
    source code
     
    stagePeer(self, targetDir, ownership=None, permissions=None)
    Stages data from the peer into the indicated local target directory.
    source code
     
    checkCollectIndicator(self, collectIndicator=None)
    Checks the collect indicator in the peer's staging directory.
    source code
     
    writeStageIndicator(self, stageIndicator=None)
    Writes the stage indicator in the peer's staging directory.
    source code
     
    executeRemoteCommand(self, command)
    Executes a command on the peer via remote shell.
    source code
     
    executeManagedAction(self, action, fullBackup)
    Executes a managed action on this peer.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setWorkingDir(self, value)
    Property target used to set the working directory.
    source code
     
    _getWorkingDir(self)
    Property target used to get the working directory.
    source code
     
    _setRemoteUser(self, value)
    Property target used to set the remote user.
    source code
     
    _getRemoteUser(self)
    Property target used to get the remote user.
    source code
     
    _setLocalUser(self, value)
    Property target used to set the local user.
    source code
     
    _getLocalUser(self)
    Property target used to get the local user.
    source code
     
    _setRcpCommand(self, value)
    Property target to set the rcp command.
    source code
     
    _getRcpCommand(self)
    Property target used to get the rcp command.
    source code
     
    _setRshCommand(self, value)
    Property target to set the rsh command.
    source code
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _getDirContents(path)
    Returns the contents of a directory in terms of a Set.
    source code
     
    _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None)
    Copies files from the source directory to the target directory.
    source code
     
    _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True)
    Copies a remote source file to a target file.
    source code
     
    _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True)
    Copies a local source file to a remote host.
    source code
     
    _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand)
    Executes a command on the peer via remote shell.
    source code
     
    _buildCbackCommand(cbackCommand, action, fullBackup)
    Builds a Cedar Backup command line for the named action.
    source code
    Properties
      name
    Name of the peer (a valid DNS hostname).
      collectDir
    Path to the peer's collect directory (an absolute local path).
      remoteUser
    Name of the Cedar Backup user on the remote peer.
      rcpCommand
    An rcp-compatible copy command to use for copying files.
      rshCommand
    An rsh-compatible command to use for remote shells to the peer.
      cbackCommand
    A cback-compatible command to use for executing managed actions.
      workingDir
    Path to the peer's working directory (an absolute local path).
      localUser
    Name of the Cedar Backup user on the current host.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None)
    (Constructor)

    source code 

    Initializes a remote backup peer.

    Parameters:
    • name (String, must be a valid DNS hostname) - Name of the backup peer
    • collectDir (String representing an absolute path on the remote peer) - Path to the peer's collect directory
    • workingDir (String representing an absolute path on the current host.) - Working directory that can be used to create temporary files, etc.
    • remoteUser (String representing a username, valid via remote shell to the peer) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rshCommand (String representing a system command including required arguments) - An rsh-compatible command to use for remote shells to the peer
    • cbackCommand (String representing a system command including required arguments) - A cback-compatible command to use for executing managed actions
    • ignoreFailureMode (One of VALID_FAILURE_MODES) - Ignore failure mode for this peer
    Raises:
    • ValueError - If collect directory is not an absolute path
    Overrides: object.__init__

    Note: If provided, each command will eventually be parsed into a list of strings suitable for passing to util.executeCommand in order to avoid security holes related to shell interpolation. This parsing will be done by the util.splitCommandLine function. See the documentation for that function for some important notes about its limitations.
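    As a rough analogue of the parsing the note describes (shlex.split is a standard-library stand-in; the package's util.splitCommandLine has its own documented limitations):

    ```python
    import shlex

    # A configured command string is parsed once into a list of arguments.
    rcp_command = "/usr/bin/scp -B -q"
    rcp_command_list = shlex.split(rcp_command)

    # Passing a list (rather than a string) to the executor runs the binary
    # directly with those arguments - no shell is involved, so shell
    # metacharacters in filenames cannot be interpolated.
    ```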

    stagePeer(self, targetDir, ownership=None, permissions=None)

    source code 

    Stages data from the peer into the indicated local target directory.

    The target directory must already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied.

    Parameters:
    • targetDir (String representing a directory on disk) - Target directory to write data into
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the staged files should have
    • permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If target directory is not a directory, does not exist or is not absolute.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there were no files to stage (i.e. the directory was empty)
    • IOError - If there is an IO error copying a file.
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • Unlike the local peer version of this method, an I/O error might or might not be raised if the directory is empty. Since we're using a remote copy method, we just don't have the fine-grained control over our exceptions that's available when we can look directly at the filesystem, and we can't control whether the remote copy method thinks an empty directory is an error.

    checkCollectIndicator(self, collectIndicator=None)

    source code 

    Checks the collect indicator in the peer's staging directory.

    When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. If the remote copy command fails, we return False as if the file weren't there.

    If you need to, you can override the name of the collect indicator file by passing in a different name.

    Parameters:
    • collectIndicator (String representing name of a file in the collect directory) - Name of the collect indicator file to check
    Returns:
    Boolean true/false depending on whether the indicator exists.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    Note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. Because of this, the implementation of this method is rather convoluted.

    writeStageIndicator(self, stageIndicator=None)

    source code 

    Writes the stage indicator in the peer's staging directory.

    When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete.

    If you need to, you can override the name of the stage indicator file by passing in a different name.

    Parameters:
    • stageIndicator (String representing name of a file in the collect directory) - Name of the indicator file to write
    Raises:
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error creating the file.
    • OSError - If there is an OS error creating or changing permissions on the file

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    executeRemoteCommand(self, command)

    source code 

    Executes a command on the peer via remote shell.

    Parameters:
    • command (String command-line suitable for use with rsh.) - Command to execute
    Raises:
    • IOError - If there is an error executing the command on the remote peer.

    executeManagedAction(self, action, fullBackup)

    source code 

    Executes a managed action on this peer.

    Parameters:
    • action - Name of the action to execute.
    • fullBackup - Whether a full backup should be executed.
    Raises:
    • IOError - If there is an error executing the action on the remote peer.

    _getDirContents(path)
    Static Method

    source code 

    Returns the contents of a directory in terms of a Set.

    The directory's contents are read as a FilesystemList containing only files, and then the list is converted into a set object for later use.

    Parameters:
    • path (String representing a path on disk) - Directory path to get contents for
    Returns:
    Set of files in the directory
    Raises:
    • ValueError - If path is not a directory or does not exist.

    _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None)
    Static Method

    source code 

    Copies files from the source directory to the target directory.

    This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. Behavior when copying soft links from the collect directory is dependent on the behavior of the specified rcp command.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceDir (String representing a directory on disk) - Source directory
    • targetDir (String representing a directory on disk) - Target directory
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied files should have
    • permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If source or target is not a directory or does not exist.
    • IOError - If there is an IO error copying the files.
    Notes:
    • The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • We don't have a good way of knowing exactly what files we copied down from the remote peer, unless we want to parse the output of the rcp command (ugh). We could change permissions on everything in the target directory, but that's kind of ugly too. Instead, we use Python's set functionality to figure out what files were added while we executed the rcp command. This isn't perfect - for instance, it's not correct if someone else is messing with the directory at the same time we're doing the remote copy - but it's about as good as we're going to get.
    • Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by raising an IOError if we don't copy any files from the remote host.
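    The set-difference bookkeeping and the empty-copy workaround described in these notes can be sketched as follows (a minimal illustration with a hypothetical run_rcp callable, not the actual CedarBackup2 peer implementation):

    ```python
    import os

    def copy_remote_dir(target_dir, run_rcp):
        """Sketch: detect files added by an opaque copy command via set difference."""
        before = set(os.listdir(target_dir))
        run_rcp()                           # opaque remote copy into target_dir
        after = set(os.listdir(target_dir))
        copied = after - before             # files that appeared during the copy
        if not copied:
            # work around rcp implementations that exit 0 even on failure
            raise IOError("No files copied from remote host.")
        return len(copied)
    ```

    As the note says, the count is only approximate if another process modifies the target directory while the copy runs.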

    _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True)
    Static Method

    source code 

    Copies a remote source file to a target file.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied file should have
    • permissions (UNIX permissions mode, specified in octal, e.g. 0640) - Permissions that the copied file should have
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • IOError - If the target file already exists.
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error changing permissions on the file
    Notes:
    • Internally, we have to go through and escape any spaces in the source path with double-backslash, otherwise things get screwed up. It doesn't seem to be required in the target path. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH).
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • We will not overwrite a target file that exists when this method is invoked. If the target already exists, we'll raise an exception.
    • Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by raising an IOError if the target file does not exist when we're done.
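    The space-escaping note above amounts to something like the following when building the scp-style source argument (a sketch only; portability beyond OpenSSH is not guaranteed, as the note says):

    ```python
    def escape_source_path(remote_user, remote_host, source_file):
        """Sketch: escape spaces in the remote source path for scp-style commands.

        The doubled backslash in the Python literal yields a single backslash
        before each space in the resulting argument.
        """
        escaped = source_file.replace(" ", "\\ ")
        return "%s@%s:%s" % (remote_user, remote_host, escaped)
    ```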

    _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True)
    Static Method

    source code 

    Copies a local source file to a remote host.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error changing permissions on the file
    Notes:
    • We will not overwrite a target file that exists when this method is invoked. If the target already exists, we'll raise an exception.
    • Internally, we have to go through and escape any spaces in the source and target paths with double-backslash, otherwise things get screwed up. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH).
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
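    If you only have names rather than numeric ids, the ownership tuple mentioned in these notes can be built from the standard pwd and grp modules, which is essentially what util.getUidGid does (a sketch; the real function adds its own error handling, and these modules are Unix-only):

    ```python
    import grp
    import pwd

    def get_uid_gid(user, group):
        """Look up a (uid, gid) ownership tuple from user and group names."""
        uid = pwd.getpwnam(user).pw_uid
        gid = grp.getgrnam(group).gr_gid
        return (uid, gid)
    ```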

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path and cannot be None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setWorkingDir(self, value)

    source code 

    Property target used to set the working directory. The value must be an absolute path and cannot be None.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRemoteUser(self, value)

    source code 

    Property target used to set the remote user. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setLocalUser(self, value)

    source code 

    Property target used to set the local user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)

    source code 

    Property target to set the rcp command.

    The value must be a non-empty string or None. Its value is stored in two forms: "raw", as provided by the client, and "parsed" into a list suitable for passing to util.executeCommand via util.splitCommandLine.

    However, all the caller will ever see via the property is the actual value they set (which includes seeing None, even if we translate that internally to DEF_RCP_COMMAND). Internally, we should always use self._rcpCommandList if we want the actual command list.

    Raises:
    • ValueError - If the value is an empty string.
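    The raw/parsed dual storage described above can be sketched with a simple property (a minimal illustration: shlex.split stands in for util.splitCommandLine, and the DEF_RCP_COMMAND defaulting is omitted):

    ```python
    import shlex

    class PeerSketch(object):
        """Sketch: store a command both raw (for the caller) and parsed (for execution)."""

        def __init__(self):
            self._rcpCommand = None
            self._rcpCommandList = None

        def _setRcpCommand(self, value):
            if value is not None and len(value) < 1:
                raise ValueError("The rcp command must be a non-empty string.")
            self._rcpCommand = value                       # "raw" form, echoed back to caller
            if value is not None:
                self._rcpCommandList = shlex.split(value)  # "parsed" form, used internally

        def _getRcpCommand(self):
            return self._rcpCommand

        rcpCommand = property(_getRcpCommand, _setRcpCommand)
    ```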

    _setRshCommand(self, value)

    source code 

    Property target to set the rsh command.

    The value must be a non-empty string or None. Its value is stored in two forms: "raw", as provided by the client, and "parsed" into a list suitable for passing to util.executeCommand via util.splitCommandLine.

    However, all the caller will ever see via the property is the actual value they set (which includes seeing None, even if we translate that internally to DEF_RSH_COMMAND). Internally, we should always use self._rshCommandList if we want the actual command list.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)

    source code 

    Property target to set the cback command.

    The value must be a non-empty string or None. Unlike the other commands, this value is only stored in the "raw" form provided by the client.

    Raises:
    • ValueError - If the value is an empty string.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand)
    Static Method

    source code 

    Executes a command on the peer via remote shell.

    Parameters:
    • remoteUser (String representing a username, valid on the remote host) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rshCommand (String representing a system command including required arguments) - An rsh-compatible command to use for remote shells to the peer
    • rshCommandList (Command as a list to be passed to util.executeCommand) - An rsh-compatible command to use for remote shells to the peer
    • remoteCommand (String command-line, with no special shell characters ($, <, etc.)) - The command to be executed on the remote host
    Raises:
    • IOError - If there is an error executing the remote command
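    A rough sketch of how such a remote invocation might be assembled (illustrative only; the actual method runs the result through util.executeCommand and may structure its arguments differently):

    ```python
    def build_remote_shell_args(rsh_command_list, remote_user, remote_host, remote_command):
        """Sketch: assemble an argument list for running a command on a peer via rsh/ssh.

        Because remote_command is documented to contain no special shell
        characters, a plain whitespace split is safe here.
        """
        return rsh_command_list + ["%s@%s" % (remote_user, remote_host)] + remote_command.split()
    ```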

    _buildCbackCommand(cbackCommand, action, fullBackup)
    Static Method

    source code 

    Builds a Cedar Backup command line for the named action.

    Parameters:
    • cbackCommand - cback command to execute, including required options
    • action - Name of the action to execute.
    • fullBackup - Whether a full backup should be executed.
    Returns:
    String suitable for passing to _executeRemoteCommand as remoteCommand.
    Raises:
    • ValueError - If action is None.

    Note: If the cback command is None, then DEF_CBACK_COMMAND is used.
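    The construction described above can be sketched as follows (a hedged illustration: the `--full` switch and the `/usr/bin/cback` default path are assumptions for the sketch, not values taken from this page):

    ```python
    DEF_CBACK_COMMAND = "/usr/bin/cback"   # assumed default, for illustration only

    def build_cback_command(cback_command, action, full_backup):
        """Sketch: build a cback command line for the named action."""
        if action is None:
            raise ValueError("Action cannot be None.")
        if cback_command is None:
            cback_command = DEF_CBACK_COMMAND
        if full_backup:
            return "%s --full %s" % (cback_command, action)
        return "%s %s" % (cback_command, action)
    ```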


    Property Details

    name

    Name of the peer (a valid DNS hostname).

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Path to the peer's collect directory (an absolute local path).

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    remoteUser

    Name of the Cedar Backup user on the remote peer.

    Get Method:
    _getRemoteUser(self) - Property target used to get the remote user.
    Set Method:
    _setRemoteUser(self, value) - Property target used to set the remote user.

    rcpCommand

    An rcp-compatible copy command to use for copying files.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target to set the rcp command.

    rshCommand

    An rsh-compatible command to use for remote shells to the peer.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target to set the rsh command.

    cbackCommand

    A cback-compatible command to use for executing managed actions.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target to set the cback command.

    workingDir

    Path to the peer's working directory (an absolute local path).

    Get Method:
    _getWorkingDir(self) - Property target used to get the working directory.
    Set Method:
    _setWorkingDir(self, value) - Property target used to set the working directory.

    localUser

    Name of the Cedar Backup user on the current host.

    Get Method:
    _getLocalUser(self) - Property target used to get the local user.
    Set Method:
    _setLocalUser(self, value) - Property target used to set the local user.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

    CedarBackup2.util.RegexList
    Package CedarBackup2 :: Module util :: Class RegexList

    Class RegexList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RegexList
    

    Class representing a list of valid regular expression strings.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is a valid regular expression.
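    A minimal sketch of this validation pattern (an illustration only, not the actual CedarBackup2.util implementation):

    ```python
    import re

    class RegexListSketch(list):
        """List that accepts only strings which compile as regular expressions."""

        def append(self, item):
            self._validate(item)
            super(RegexListSketch, self).append(item)

        def insert(self, index, item):
            self._validate(item)
            super(RegexListSketch, self).insert(index, item)

        def extend(self, seq):
            for item in seq:
                self._validate(item)
            super(RegexListSketch, self).extend(seq)

        @staticmethod
        def _validate(item):
            try:
                re.compile(item)
            except re.error:
                raise ValueError("Item is not a valid regular expression: %s" % item)
    ```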

    Instance Methods
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is not a valid regular expression.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not a valid regular expression.
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If any item is not a valid regular expression.
    Overrides: list.extend

    CedarBackup2.extend.subversion.SubversionConfig
    Package CedarBackup2 :: Package extend :: Module subversion :: Class SubversionConfig

    Class SubversionConfig

    source code

    object --+
             |
            SubversionConfig
    

    Class representing Subversion configuration.

    Subversion configuration is used for backing up Subversion repositories.

    The following restrictions exist on data in this class:

    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The repositories list must be a list of Repository objects.
    • The repositoryDirs list must be a list of RepositoryDir objects.

    For the two lists, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element has the correct type.


    Note: Lists within this class are "unordered" for equality comparisons.
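    The transparent type checking provided by util.ObjectTypeList can be sketched like this (an illustration only; the real class also overrides other list methods and produces its own error messages):

    ```python
    class ObjectTypeListSketch(list):
        """List that transparently enforces a single element type."""

        def __init__(self, objectType, objectName):
            super(ObjectTypeListSketch, self).__init__()
            self.objectType = objectType   # required class for every element
            self.objectName = objectName   # human-readable name for error messages

        def _validate(self, item):
            if not isinstance(item, self.objectType):
                raise ValueError("Item must be a %s." % self.objectName)

        def append(self, item):
            self._validate(item)
            super(ObjectTypeListSketch, self).append(item)

        def insert(self, index, item):
            self._validate(item)
            super(ObjectTypeListSketch, self).insert(index, item)

        def extend(self, seq):
            for item in seq:
                self._validate(item)
            super(ObjectTypeListSketch, self).extend(seq)
    ```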

    Instance Methods
     
    __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None)
    Constructor for the SubversionConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setRepositories(self, value)
    Property target used to set the repositories list.
    source code
     
    _getRepositories(self)
    Property target used to get the repositories list.
    source code
     
    _setRepositoryDirs(self, value)
    Property target used to set the repositoryDirs list.
    source code
     
    _getRepositoryDirs(self)
    Property target used to get the repositoryDirs list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      collectMode
    Default collect mode.
      compressMode
    Default compress mode.
      repositories
    List of Subversion repositories to back up.
      repositoryDirs
    List of Subversion parent directories to back up.

    Inherited from object: __class__

    Method Details

    __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None)
    (Constructor)

    source code 

    Constructor for the SubversionConfig class.

    Parameters:
    • collectMode - Default collect mode.
    • compressMode - Default compress mode.
    • repositories - List of Subversion repositories to back up.
    • repositoryDirs - List of Subversion parent directories to back up.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRepositories(self, value)

    source code 

    Property target used to set the repositories list. Either the value must be None or each element must be a Repository.

    Raises:
    • ValueError - If the value is not a Repository

    _setRepositoryDirs(self, value)

    source code 

    Property target used to set the repositoryDirs list. Either the value must be None or each element must be a RepositoryDir.

    Raises:
    • ValueError - If the value is not a RepositoryDir

    Property Details

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Default compress mode.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositories

    List of Subversion repositories to back up.

    Get Method:
    _getRepositories(self) - Property target used to get the repositories list.
    Set Method:
    _setRepositories(self, value) - Property target used to set the repositories list.

    repositoryDirs

    List of Subversion parent directories to back up.

    Get Method:
    _getRepositoryDirs(self) - Property target used to get the repositoryDirs list.
    Set Method:
    _setRepositoryDirs(self, value) - Property target used to set the repositoryDirs list.

    CedarBackup2.writers.util
    Package CedarBackup2 :: Package writers :: Module util

    Source Code for Module CedarBackup2.writers.util

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: util.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Provides utilities related to image writers. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides utilities related to image writers. 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import re 
     52  import logging 
     53   
     54  # Cedar Backup modules 
     55  from CedarBackup2.util import resolveCommand, executeCommand 
     56  from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_SECTORS, encodePath 
     57   
     58   
     59  ######################################################################## 
     60  # Module-wide constants and variables 
     61  ######################################################################## 
     62   
     63  logger = logging.getLogger("CedarBackup2.log.writers.util") 
     64   
     65  MKISOFS_COMMAND      = [ "mkisofs", ] 
     66  VOLNAME_COMMAND      = [ "volname", ] 
    
     67
     68
     69  ########################################################################
     70  # Functions used to portably validate certain kinds of values
     71  ########################################################################
     72
     73  ############################
     74  # validateDevice() function
     75  ############################
     76
     77  def validateDevice(device, unittest=False):
     78     """
     79     Validates a configured device.
     80     The device must be an absolute path, must exist, and must be writable.
     81     The unittest flag turns off validation of the device on disk.
     82     @param device: Filesystem device path.
     83     @param unittest: Indicates whether we're unit testing.
     84     @return: Device as a string, for instance C{"/dev/cdrw"}
     85     @raise ValueError: If the device value is invalid.
     86     @raise ValueError: If some path cannot be encoded properly.
     87     """
     88     if device is None:
     89        raise ValueError("Device must be filled in.")
     90     device = encodePath(device)
     91     if not os.path.isabs(device):
     92        raise ValueError("Backup device must be an absolute path.")
     93     if not unittest and not os.path.exists(device):
     94        raise ValueError("Backup device must exist on disk.")
     95     if not unittest and not os.access(device, os.W_OK):
     96        raise ValueError("Backup device is not writable by the current user.")
     97     return device
     98
     99
    100  ############################
    101  # validateScsiId() function
    102  ############################
    103
    104  def validateScsiId(scsiId):
    105     """
    106     Validates a SCSI id string.
    107     SCSI id must be a string in the form C{[<method>:]scsibus,target,lun}.
    108     For Mac OS X (Darwin), we also accept the form C{IO.*Services[/N]}.
    109     @note: For consistency, if C{None} is passed in, C{None} will be returned.
    110     @param scsiId: SCSI id for the device.
    111     @return: SCSI id as a string, for instance C{"ATA:1,0,0"}
    112     @raise ValueError: If the SCSI id string is invalid.
    113     """
    114     if scsiId is not None:
    115        pattern = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$")
    116        if not pattern.search(scsiId):
    117           pattern = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$")
    118           if not pattern.search(scsiId):
    119              raise ValueError("SCSI id is not in a valid form.")
    120     return scsiId
    121
    122
    123  ################################
    124  # validateDriveSpeed() function
    125  ################################
    126
    127  def validateDriveSpeed(driveSpeed):
    128     """
    129     Validates a drive speed value.
    130     Drive speed must be an integer which is >= 1.
    131     @note: For consistency, if C{None} is passed in, C{None} will be returned.
    132     @param driveSpeed: Speed at which the drive writes.
    133     @return: Drive speed as an integer
    134     @raise ValueError: If the drive speed value is invalid.
    135     """
    136     if driveSpeed is None:
    137        return None
    138     try:
    139        intSpeed = int(driveSpeed)
    140     except TypeError:
    141        raise ValueError("Drive speed must be an integer >= 1.")
    142     if intSpeed < 1:
    143        raise ValueError("Drive speed must be an integer >= 1.")
    144     return intSpeed
    145
    146
    147  ########################################################################
    148  # General writer-related utility functions
    149  ########################################################################
    150
    151  ############################
    152  # readMediaLabel() function
    153  ############################
    154
    155  def readMediaLabel(devicePath):
    156     """
    157     Reads the media label (volume name) from the indicated device.
    158     The volume name is read using the C{volname} command.
    159     @param devicePath: Device path to read from
    160     @return: Media label as a string, or None if there is no name or it could not be read.
    161     """
    162     args = [ devicePath, ]
    163     command = resolveCommand(VOLNAME_COMMAND)
    164     (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
    165     if result != 0:
    166        return None
    167     if output is None or len(output) < 1:
    168        return None
    169     return output[0].rstrip()
    170
    171
    172  ########################################################################
    173  # IsoImage class definition
    174  ########################################################################
    175
    176  class IsoImage(object):
    177
    178     ######################
    179     # Class documentation
    180     ######################
    181
    182     """
    183     Represents an ISO filesystem image.
    184
    185     Summary
    186     =======
    187
    188     This object represents an ISO 9660 filesystem image.  It is implemented
    189     in terms of the C{mkisofs} program, which has been ported to many
    190     operating systems and platforms.  A "sensible subset" of the C{mkisofs}
    191     functionality is made available through the public interface, allowing
    192     callers to set a variety of basic options such as publisher id,
    193     application id, etc. as well as specify exactly which files and
    194     directories they want included in their image.
    195
    196     By default, the image is created using the Rock Ridge protocol (using the
    197     C{-r} option to C{mkisofs}) because Rock Ridge discs are generally more
    198     useful on UN*X filesystems than standard ISO 9660 images.  However,
    199     callers can fall back to the default C{mkisofs} functionality by setting
    200     the C{useRockRidge} instance variable to C{False}.  Note, however, that
    201     this option is not well-tested.
    202
    203     Where Files and Directories are Placed in the Image
    204     ===================================================
    205
    206     Although this class is implemented in terms of the C{mkisofs} program,
    207     its standard "image contents" semantics are slightly different than the original
    208     C{mkisofs} semantics.  The difference is that files and directories are
    209     added to the image with some additional information about their source
    210     directory kept intact.
    211
    212     As an example, suppose you add the file C{/etc/profile} to your image and
    213     you do not configure a graft point.  The file C{/profile} will be created
    214     in the image.  The behavior for directories is similar.  For instance,
    215     suppose that you add C{/etc/X11} to the image and do not configure a
    216     graft point.  In this case, the directory C{/X11} will be created in the
    217     image, even if the original C{/etc/X11} directory is empty.  I{This
    218     behavior differs from the standard C{mkisofs} behavior!}
    219
    220     If a graft point is configured, it will be used to modify the point at
    221     which a file or directory is added into an image.  Using the examples
    222     from above, let's assume you set a graft point of C{base} when adding
    223     C{/etc/profile} and C{/etc/X11} to your image.  In this case, the file
    224     C{/base/profile} and the directory C{/base/X11} would be added to the
    225     image.
    226
    227     I feel that this behavior is more consistent than the original C{mkisofs}
    228     behavior.  However, to be fair, it is not quite as flexible, and some
    229     users might not like it.  For this reason, the C{contentsOnly} parameter
    230     to the L{addEntry} method can be used to revert to the original behavior
    231     if desired.
    232
    233     @sort: __init__, addEntry, getEstimatedSize, _getEstimatedSize, writeImage,
    234            _buildDirEntries, _buildGeneralArgs, _buildSizeArgs, _buildWriteArgs,
    235            device, boundaries, graftPoint, useRockRidge, applicationId,
    236            biblioFile, publisherId, preparerId, volumeId
    237     """
    238
    239     ##############
    240     # Constructor
    241     ##############
    242
    243     def __init__(self, device=None, boundaries=None, graftPoint=None):
    244        """
    245        Initializes an empty ISO image object.
    246
    247        Only the most commonly-used configuration items can be set using this
    248        constructor.  If you have a need to change the others, do so immediately
    249        after creating your object.
    250
    251        The device and boundaries values are both required in order to write
    252        multisession discs.  If either is missing or C{None}, a multisession disc
    253        will not be written.  The boundaries tuple is in terms of ISO sectors, as
    254        built by an image writer class and returned in a L{writer.MediaCapacity}
    255        object.
    256
    257        @param device: Name of the device that the image will be written to
    258        @type device: Either be a filesystem path or a SCSI address
    259
    260        @param boundaries: Session boundaries as required by C{mkisofs}
    261        @type boundaries: Tuple C{(last_sess_start,next_sess_start)} as returned from C{cdrecord -msinfo}, or C{None}
    262
    263        @param graftPoint: Default graft point for this page.
    264        @type graftPoint: String representing a graft point path (see L{addEntry}).
    265        """
    266        self._device = None
    267        self._boundaries = None
    268        self._graftPoint = None
    269        self._useRockRidge = True
    270        self._applicationId = None
    271        self._biblioFile = None
    272        self._publisherId = None
    273        self._preparerId = None
    274        self._volumeId = None
    275        self.entries = { }
    276        self.device = device
    277        self.boundaries = boundaries
    278        self.graftPoint = graftPoint
    279        self.useRockRidge = True
    280        self.applicationId = None
    281        self.biblioFile = None
    282        self.publisherId = None
    283        self.preparerId = None
    284        self.volumeId = None
    285        logger.debug("Created new ISO image object.")
    286
    287
    288     #############
    289     # Properties
    290     #############
    291
    292     def _setDevice(self, value):
    293        """
    294        Property target used to set the device value.
    295        If not C{None}, the value can be either an absolute path or a SCSI id.
    296        @raise ValueError: If the value is not valid
    297        """
    298        try:
    299           if value is None:
    300              self._device = None
    301           else:
    302              if os.path.isabs(value):
    303                 self._device = value
    304              else:
    305                 self._device = validateScsiId(value)
    306        except ValueError:
    307           raise ValueError("Device must either be an absolute path or a valid SCSI id.")
    308
    309     def _getDevice(self):
    310        """
    311        Property target used to get the device value.
    312        """
    313        return self._device
    314
    315     def _setBoundaries(self, value):
    316        """
    317        Property target used to set the boundaries tuple.
    318        If not C{None}, the value must be a tuple of two integers.
    319        @raise ValueError: If the tuple values are not integers.
    320        @raise IndexError: If the tuple does not contain enough elements.
    321        """
    322        if value is None:
    323           self._boundaries = None
    324        else:
    325           self._boundaries = (int(value[0]), int(value[1]))
    326
    327     def _getBoundaries(self):
    328        """
    329        Property target used to get the boundaries value.
    330        """
    331        return self._boundaries
    332
    333     def _setGraftPoint(self, value):
    334        """
    335        Property target used to set the graft point.
    336        The value must be a non-empty string if it is not C{None}.
    337        @raise ValueError: If the value is an empty string.
    338        """
    339        if value is not None:
    340           if len(value) < 1:
    341              raise ValueError("The graft point must be a non-empty string.")
    342        self._graftPoint = value
    343
    344     def _getGraftPoint(self):
    345        """
    346        Property target used to get the graft point.
    347        """
    348        return self._graftPoint
    349
    350     def _setUseRockRidge(self, value):
    351        """
    352        Property target used to set the use RockRidge flag.
    353        No validations, but we normalize the value to C{True} or C{False}.
    354        """
    355        if value:
    356           self._useRockRidge = True
    357        else:
    358           self._useRockRidge = False
    359
    360 - def _getUseRockRidge(self):
    361 """ 362 Property target used to get the use RockRidge flag. 363 """ 364 return self._useRockRidge
    365
    366 - def _setApplicationId(self, value):
    367 """ 368 Property target used to set the application id. 369 The value must be a non-empty string if it is not C{None}. 370 @raise ValueError: If the value is an empty string. 371 """ 372 if value is not None: 373 if len(value) < 1: 374 raise ValueError("The application id must be a non-empty string.") 375 self._applicationId = value
    376
    377 - def _getApplicationId(self):
    378 """ 379 Property target used to get the application id. 380 """ 381 return self._applicationId
    382
    383 - def _setBiblioFile(self, value):
    384 """ 385 Property target used to set the biblio file. 386 The value must be a non-empty string if it is not C{None}. 387 @raise ValueError: If the value is an empty string. 388 """ 389 if value is not None: 390 if len(value) < 1: 391 raise ValueError("The biblio file must be a non-empty string.") 392 self._biblioFile = value
    393
    394 - def _getBiblioFile(self):
    395 """ 396 Property target used to get the biblio file. 397 """ 398 return self._biblioFile
    399
    400 - def _setPublisherId(self, value):
    401 """ 402 Property target used to set the publisher id. 403 The value must be a non-empty string if it is not C{None}. 404 @raise ValueError: If the value is an empty string. 405 """ 406 if value is not None: 407 if len(value) < 1: 408 raise ValueError("The publisher id must be a non-empty string.") 409 self._publisherId = value
    410
    411 - def _getPublisherId(self):
    412 """ 413 Property target used to get the publisher id. 414 """ 415 return self._publisherId
    416
    417 - def _setPreparerId(self, value):
    418 """ 419 Property target used to set the preparer id. 420 The value must be a non-empty string if it is not C{None}. 421 @raise ValueError: If the value is an empty string. 422 """ 423 if value is not None: 424 if len(value) < 1: 425 raise ValueError("The preparer id must be a non-empty string.") 426 self._preparerId = value
    427
    428 - def _getPreparerId(self):
    429 """ 430 Property target used to get the preparer id. 431 """ 432 return self._preparerId
    433
    434 - def _setVolumeId(self, value):
    435 """ 436 Property target used to set the volume id. 437 The value must be a non-empty string if it is not C{None}. 438 @raise ValueError: If the value is an empty string. 439 """ 440 if value is not None: 441 if len(value) < 1: 442 raise ValueError("The volume id must be a non-empty string.") 443 self._volumeId = value
    444
    445 - def _getVolumeId(self):
    446 """ 447 Property target used to get the volume id. 448 """ 449 return self._volumeId
    450 451 device = property(_getDevice, _setDevice, None, "Device that image will be written to (device path or SCSI id).") 452 boundaries = property(_getBoundaries, _setBoundaries, None, "Session boundaries as required by C{mkisofs}.") 453 graftPoint = property(_getGraftPoint, _setGraftPoint, None, "Default image-wide graft point (see L{addEntry} for details).") 454 useRockRidge = property(_getUseRockRidge, _setUseRockRidge, None, "Indicates whether to use RockRidge (default is C{True}).") 455 applicationId = property(_getApplicationId, _setApplicationId, None, "Optionally specifies the ISO header application id value.") 456 biblioFile = property(_getBiblioFile, _setBiblioFile, None, "Optionally specifies the ISO bibliographic file name.") 457 publisherId = property(_getPublisherId, _setPublisherId, None, "Optionally specifies the ISO header publisher id value.") 458 preparerId = property(_getPreparerId, _setPreparerId, None, "Optionally specifies the ISO header preparer id value.") 459 volumeId = property(_getVolumeId, _setVolumeId, None, "Optionally specifies the ISO header volume id value.") 460 461 462 ######################### 463 # General public methods 464 ######################### 465
    466 - def addEntry(self, path, graftPoint=None, override=False, contentsOnly=False):
    467 """ 468 Adds an individual file or directory into the ISO image. 469 470 The path must exist and must be a file or a directory. By default, the 471 entry will be placed into the image at the root directory, but this 472 behavior can be overridden using the C{graftPoint} parameter or instance 473 variable. 474 475 You can use the C{contentsOnly} behavior to revert to the "original" 476 C{mkisofs} behavior for adding directories, which is to add only the 477 items within the directory, and not the directory itself. 478 479 @note: Things get I{odd} if you try to add a directory to an image that 480 will be written to a multisession disc, and the same directory already 481 exists in an earlier session on that disc. Not all of the data gets 482 written. You really wouldn't want to do this anyway, I guess. 483 484 @note: An exception will be thrown if the path has already been added to 485 the image, unless the C{override} parameter is set to C{True}. 486 487 @note: The method C{graftPoint} parameter overrides the object-wide 488 instance variable. If neither the method parameter nor the object-wide value 489 is set, the path will be written at the image root. The graft point 490 behavior is determined by the value which is in effect I{at the time this 491 method is called}, so you I{must} set the object-wide value before 492 calling this method for the first time, or your image may not be 493 consistent. 494 495 @note: You I{cannot} use the local C{graftPoint} parameter to "turn off" 496 an object-wide instance variable by setting it to C{None}. Python's 497 default argument functionality buys us a lot, but it can't make this 498 method psychic. 
:) 499 500 @param path: File or directory to be added to the image 501 @type path: String representing a path on disk 502 503 @param graftPoint: Graft point to be used when adding this entry 504 @type graftPoint: String representing a graft point path, as described above 505 506 @param override: Override an existing entry with the same path. 507 @type override: Boolean true/false 508 509 @param contentsOnly: Add directory contents only (standard C{mkisofs} behavior). 510 @type contentsOnly: Boolean true/false 511 512 @raise ValueError: If path is not a file or directory, or does not exist. 513 @raise ValueError: If the path has already been added, and override is not set. 514 @raise ValueError: If a path cannot be encoded properly. 515 """ 516 path = encodePath(path) 517 if not override: 518 if path in self.entries.keys(): 519 raise ValueError("Path has already been added to the image.") 520 if os.path.islink(path): 521 raise ValueError("Path must not be a link.") 522 if os.path.isdir(path): 523 if graftPoint is not None: 524 if contentsOnly: 525 self.entries[path] = graftPoint 526 else: 527 self.entries[path] = os.path.join(graftPoint, os.path.basename(path)) 528 elif self.graftPoint is not None: 529 if contentsOnly: 530 self.entries[path] = self.graftPoint 531 else: 532 self.entries[path] = os.path.join(self.graftPoint, os.path.basename(path)) 533 else: 534 if contentsOnly: 535 self.entries[path] = None 536 else: 537 self.entries[path] = os.path.basename(path) 538 elif os.path.isfile(path): 539 if graftPoint is not None: 540 self.entries[path] = graftPoint 541 elif self.graftPoint is not None: 542 self.entries[path] = self.graftPoint 543 else: 544 self.entries[path] = None 545 else: 546 raise ValueError("Path must be a file or a directory.")
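The branching above reduces to a small placement rule. As a standalone sketch (a hypothetical helper name, written in Python 3 rather than the module's Python 2, and not part of the CedarBackup2 API), the graft prefix recorded in C{self.entries} for each path is:

```python
import os.path

def entry_graft_prefix(path, is_dir, graft_point=None, contents_only=False):
    # Mirrors the addEntry() bookkeeping: returns the graft prefix
    # recorded for an entry, or None for an entry at the image root.
    if graft_point is None:
        return os.path.basename(path) if (is_dir and not contents_only) else None
    if is_dir and not contents_only:
        return os.path.join(graft_point, os.path.basename(path))
    return graft_point

# /etc/profile grafted at "base" ends up as /base/profile in the image
assert entry_graft_prefix("/etc/profile", is_dir=False, graft_point="base") == "base"
# /etc/X11 grafted at "base" ends up as /base/X11/... in the image
assert entry_graft_prefix("/etc/X11", is_dir=True, graft_point="base") == "base/X11"
# contentsOnly reverts to stock mkisofs behavior: only the contents go in
assert entry_graft_prefix("/etc/X11", is_dir=True, graft_point="base",
                          contents_only=True) == "base"
```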
    547
    548 - def getEstimatedSize(self):
    549 """ 550 Returns the estimated size (in bytes) of the ISO image. 551 552 This is implemented via the C{-print-size} option to C{mkisofs}, so it 553 might take a bit of time to execute. However, the result is as accurate 554 as we can get, since it takes into account all of the ISO overhead, the 555 true cost of directories in the structure, etc, etc. 556 557 @return: Estimated size of the image, in bytes. 558 559 @raise IOError: If there is a problem calling C{mkisofs}. 560 @raise ValueError: If there are no filesystem entries in the image 561 """ 562 if len(self.entries.keys()) == 0: 563 raise ValueError("Image does not contain any entries.") 564 return self._getEstimatedSize(self.entries)
    565
    566 - def _getEstimatedSize(self, entries):
    567 """ 568 Returns the estimated size (in bytes) for the passed-in entries dictionary. 569 @return: Estimated size of the image, in bytes. 570 @raise IOError: If there is a problem calling C{mkisofs}. 571 """ 572 args = self._buildSizeArgs(entries) 573 command = resolveCommand(MKISOFS_COMMAND) 574 (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) 575 if result != 0: 576 raise IOError("Error (%d) executing mkisofs command to estimate size." % result) 577 if len(output) != 1: 578 raise IOError("Unable to parse mkisofs output.") 579 try: 580 sectors = float(output[0]) 581 size = convertSize(sectors, UNIT_SECTORS, UNIT_BYTES) 582 return size 583 except: 584 raise IOError("Unable to parse mkisofs output.")
    585
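The parsing step above hinges on one fact: C{mkisofs -print-size} emits a bare sector count, and an ISO-9660 logical sector is 2048 bytes (the unit that C{convertSize} is assumed to use here). A minimal sketch of the conversion, with a hypothetical helper name:

```python
SECTOR_SIZE = 2048  # bytes per ISO-9660 logical sector

def estimated_size_bytes(mkisofs_output):
    # mkisofs -print-size writes a single line holding the sector count
    if len(mkisofs_output) != 1:
        raise IOError("Unable to parse mkisofs output.")
    return float(mkisofs_output[0]) * SECTOR_SIZE

# 340000 sectors is roughly a 664 MB image
assert estimated_size_bytes(["340000"]) == 696320000.0
```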
    586 - def writeImage(self, imagePath):
    587 """ 588 Writes this image to disk using the image path. 589 590 @param imagePath: Path to write image out as 591 @type imagePath: String representing a path on disk 592 593 @raise IOError: If there is an error writing the image to disk. 594 @raise ValueError: If there are no filesystem entries in the image 595 @raise ValueError: If a path cannot be encoded properly. 596 """ 597 imagePath = encodePath(imagePath) 598 if len(self.entries.keys()) == 0: 599 raise ValueError("Image does not contain any entries.") 600 args = self._buildWriteArgs(self.entries, imagePath) 601 command = resolveCommand(MKISOFS_COMMAND) 602 (result, output) = executeCommand(command, args, returnOutput=False) 603 if result != 0: 604 raise IOError("Error (%d) executing mkisofs command to build image." % result)
    605 606 607 ######################################### 608 # Methods used to build mkisofs commands 609 ######################################### 610 611 @staticmethod
    612 - def _buildDirEntries(entries):
    613 """ 614 Uses an entries dictionary to build a list of directory locations for use 615 by C{mkisofs}. 616 617 We build a list of entries that can be passed to C{mkisofs}. Each entry is 618 either raw (if no graft point was configured) or in graft-point form as 619 described above (if a graft point was configured). The dictionary keys 620 are the path names, and the values are the graft points, if any. 621 622 @param entries: Dictionary of image entries (i.e. self.entries) 623 624 @return: List of directory locations for use by C{mkisofs} 625 """ 626 dirEntries = [] 627 for key in entries.keys(): 628 if entries[key] is None: 629 dirEntries.append(key) 630 else: 631 dirEntries.append("%s/=%s" % (entries[key].strip("/"), key)) 632 return dirEntries
    633
    634 - def _buildGeneralArgs(self):
    635 """ 636 Builds a list of general arguments to be passed to a C{mkisofs} command. 637 638 The various instance variables (C{applicationId}, etc.) are filled into 639 the list of arguments if they are set. 640 By default, we will build a RockRidge disc. If you decide to change 641 this, think hard about whether you know what you're doing. This option 642 is not well-tested. 643 644 @return: List suitable for passing to L{util.executeCommand} as C{args}. 645 """ 646 args = [] 647 if self.applicationId is not None: 648 args.append("-A") 649 args.append(self.applicationId) 650 if self.biblioFile is not None: 651 args.append("-biblio") 652 args.append(self.biblioFile) 653 if self.publisherId is not None: 654 args.append("-publisher") 655 args.append(self.publisherId) 656 if self.preparerId is not None: 657 args.append("-p") 658 args.append(self.preparerId) 659 if self.volumeId is not None: 660 args.append("-V") 661 args.append(self.volumeId) 662 return args
    663
    664 - def _buildSizeArgs(self, entries):
    665 """ 666 Builds a list of arguments to be passed to a C{mkisofs} command. 667 668 The various instance variables (C{applicationId}, etc.) are filled into 669 the list of arguments if they are set. The command will be built to just 670 return size output (a simple count of sectors via the C{-print-size} option), 671 rather than an image file on disk. 672 673 By default, we will build a RockRidge disc. If you decide to change 674 this, think hard about whether you know what you're doing. This option 675 is not well-tested. 676 677 @param entries: Dictionary of image entries (i.e. self.entries) 678 679 @return: List suitable for passing to L{util.executeCommand} as C{args}. 680 """ 681 args = self._buildGeneralArgs() 682 args.append("-print-size") 683 args.append("-graft-points") 684 if self.useRockRidge: 685 args.append("-r") 686 if self.device is not None and self.boundaries is not None: 687 args.append("-C") 688 args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) 689 args.append("-M") 690 args.append(self.device) 691 args.extend(self._buildDirEntries(entries)) 692 return args
    693
    694 - def _buildWriteArgs(self, entries, imagePath):
    695 """ 696 Builds a list of arguments to be passed to a C{mkisofs} command. 697 698 The various instance variables (C{applicationId}, etc.) are filled into 699 the list of arguments if they are set. The command will be built to write 700 an image to disk. 701 702 By default, we will build a RockRidge disc. If you decide to change 703 this, think hard about whether you know what you're doing. This option 704 is not well-tested. 705 706 @param entries: Dictionary of image entries (i.e. self.entries) 707 708 @param imagePath: Path to write image out as 709 @type imagePath: String representing a path on disk 710 711 @return: List suitable for passing to L{util.executeCommand} as C{args}. 712 """ 713 args = self._buildGeneralArgs() 714 args.append("-graft-points") 715 if self.useRockRidge: 716 args.append("-r") 717 args.append("-o") 718 args.append(imagePath) 719 if self.device is not None and self.boundaries is not None: 720 args.append("-C") 721 args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) 722 args.append("-M") 723 args.append(self.device) 724 args.extend(self._buildDirEntries(entries)) 725 return args
    726
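Putting the pieces together, the argument list for a hypothetical image looks like the sketch below. This mirrors C{_buildWriteArgs} for a RockRidge image with none of the optional ISO header values set; the helper name and sample paths are illustrative only.

```python
def build_write_args(image_path, entries, boundaries=None, device=None):
    # Same ordering as _buildWriteArgs(): graft-point mode, RockRidge,
    # output file, optional multisession options, then the entries.
    args = ["-graft-points", "-r", "-o", image_path]
    if device is not None and boundaries is not None:
        args += ["-C", "%d,%d" % boundaries, "-M", device]
    for path, graft in entries.items():
        args.append(path if graft is None else "%s/=%s" % (graft.strip("/"), path))
    return args

assert build_write_args("/tmp/backup.iso", {"/etc/profile": "base"}) == \
       ["-graft-points", "-r", "-o", "/tmp/backup.iso", "base/=/etc/profile"]
assert build_write_args("/tmp/backup.iso", {}, boundaries=(0, 164075),
                        device="/dev/cdrw") == \
       ["-graft-points", "-r", "-o", "/tmp/backup.iso",
        "-C", "0,164075", "-M", "/dev/cdrw"]
```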

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.image-module.html

    Module image


    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables
      __package__ = 'CedarBackup2'
    CedarBackup2-2.22.0/doc/interface/CedarBackup2.filesystem.SpanItem-class.html

    Class SpanItem


    object --+
             |
            SpanItem
    

    Item returned by BackupFileList.generateSpan.

    Instance Methods
     
    __init__(self, fileList, size, capacity, utilization)
    Create object.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, fileList, size, capacity, utilization)
    (Constructor)


    Create object.

    Parameters:
    • fileList - List of files
    • size - Size (in bytes) of files
    • capacity - Capacity (in bytes) of the media
    • utilization - Utilization, as a percentage (0-100)
    Overrides: object.__init__

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.initialize-pysrc.html

    Source Code for Module CedarBackup2.actions.initialize

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Copyright (c) 2007,2010 Kenneth J. Pronovici. 
    12  # All rights reserved. 
    13  # 
    14  # This program is free software; you can redistribute it and/or 
    15  # modify it under the terms of the GNU General Public License, 
    16  # Version 2, as published by the Free Software Foundation. 
    17  # 
    18  # This program is distributed in the hope that it will be useful, 
    19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
    20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
    21  # 
    22  # Copies of the GNU General Public License are available from 
    23  # the Free Software Foundation website, http://www.gnu.org/. 
    24  # 
    25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    26  # 
    27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    28  # Language : Python (>= 2.5) 
    29  # Project  : Cedar Backup, release 2 
    30  # Revision : $Id: initialize.py 1006 2010-07-07 21:03:57Z pronovic $ 
    31  # Purpose  : Implements the standard 'initialize' action. 
    32  # 
    33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    34   
    35  ######################################################################## 
    36  # Module documentation 
    37  ######################################################################## 
    38   
    39  """ 
    40  Implements the standard 'initialize' action. 
    41  @sort: executeInitialize 
    42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    43  """ 
    44   
    45   
    46  ######################################################################## 
    47  # Imported modules 
    48  ######################################################################## 
    49   
    50  # System modules 
    51  import logging 
    52   
    53  # Cedar Backup modules 
    54  from CedarBackup2.actions.util import initializeMediaState 
    55   
    56   
    57  ######################################################################## 
    58  # Module-wide constants and variables 
    59  ######################################################################## 
    60   
    61  logger = logging.getLogger("CedarBackup2.log.actions.initialize") 
    62   
    63   
    64  ######################################################################## 
    65  # Public functions 
    66  ######################################################################## 
    67   
    68  ############################### 
    69  # executeInitialize() function 
    70  ############################### 
    71   
    
    72 -def executeInitialize(configPath, options, config):
    73 """ 74 Executes the initialize action. 75 76 The initialize action initializes the media currently in the writer 77 device so that Cedar Backup can recognize it later. This is an optional 78 step; it's only required if checkMedia is set on the store configuration. 79 80 @param configPath: Path to configuration file on disk. 81 @type configPath: String representing a path on disk. 82 83 @param options: Program command-line options. 84 @type options: Options object. 85 86 @param config: Program configuration. 87 @type config: Config object. 88 """ 89 logger.debug("Executing the 'initialize' action.") 90 if config.options is None or config.store is None: 91 raise ValueError("Store configuration is not properly filled in.") 92 initializeMediaState(config) 93 logger.info("Executed the 'initialize' action successfully.")
    94

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.config-module.html

    Module config


    Classes

    ActionDependencies
    ActionHook
    BlankBehavior
    ByteQuantity
    CollectConfig
    CollectDir
    CollectFile
    CommandOverride
    Config
    ExtendedAction
    ExtensionsConfig
    LocalPeer
    OptionsConfig
    PeersConfig
    PostActionHook
    PreActionHook
    PurgeConfig
    PurgeDir
    ReferenceConfig
    RemotePeer
    StageConfig
    StoreConfig

    Functions

    addByteQuantityNode
    readByteQuantity

    Variables

    ACTION_NAME_REGEX
    DEFAULT_DEVICE_TYPE
    DEFAULT_MEDIA_TYPE
    REWRITABLE_MEDIA_TYPES
    VALID_ARCHIVE_MODES
    VALID_BLANK_MODES
    VALID_BYTE_UNITS
    VALID_CD_MEDIA_TYPES
    VALID_COLLECT_MODES
    VALID_COMPRESS_MODES
    VALID_DEVICE_TYPES
    VALID_DVD_MEDIA_TYPES
    VALID_FAILURE_MODES
    VALID_MEDIA_TYPES
    VALID_ORDER_MODES
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.xmlutil-module.html

    Module xmlutil


    Classes

    Serializer

    Functions

    addBooleanNode
    addContainerNode
    addIntegerNode
    addStringNode
    createInputDom
    createOutputDom
    isElement
    readBoolean
    readChildren
    readFirstChild
    readFloat
    readInteger
    readString
    readStringList
    serializeDom

    Variables

    FALSE_BOOLEAN_VALUES
    TRUE_BOOLEAN_VALUES
    VALID_BOOLEAN_VALUES
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.util-pysrc.html

    Source Code for Module CedarBackup2.actions.util

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: util.py 1041 2013-05-10 02:05:13Z pronovic $ 
     31  # Purpose  : Implements action-related utilities 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Implements action-related utilities 
     41  @sort: findDailyDirs 
     42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import os 
     52  import time 
     53  import tempfile 
     54  import logging 
     55   
     56  # Cedar Backup modules 
     57  from CedarBackup2.filesystem import FilesystemList 
     58  from CedarBackup2.util import changeOwnership 
     59  from CedarBackup2.util import deviceMounted 
     60  from CedarBackup2.writers.util import readMediaLabel 
     61  from CedarBackup2.writers.cdwriter import CdWriter 
     62  from CedarBackup2.writers.dvdwriter import DvdWriter 
     63  from CedarBackup2.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDR_80, MEDIA_CDRW_74, MEDIA_CDRW_80 
     64  from CedarBackup2.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW 
     65  from CedarBackup2.config import DEFAULT_MEDIA_TYPE, DEFAULT_DEVICE_TYPE, REWRITABLE_MEDIA_TYPES 
     66  from CedarBackup2.actions.constants import INDICATOR_PATTERN 
     67   
     68   
     69  ######################################################################## 
     70  # Module-wide constants and variables 
     71  ######################################################################## 
     72   
     73  logger = logging.getLogger("CedarBackup2.log.actions.util") 
     74  MEDIA_LABEL_PREFIX   = "CEDAR BACKUP" 
     75   
     76   
     77  ######################################################################## 
     78  # Public utility functions 
     79  ######################################################################## 
     80   
     81  ########################### 
     82  # findDailyDirs() function 
     83  ########################### 
     84   
    
    85 -def findDailyDirs(stagingDir, indicatorFile):
    86 """ 87 Returns a list of all daily staging directories that do not contain 88 the indicated indicator file. 89 90 @param stagingDir: Configured staging directory (config.targetDir) 91 @param indicatorFile: Name of the indicator file to look for 92 @return: List of absolute paths to daily staging directories. 93 """ 94 results = FilesystemList() 95 yearDirs = FilesystemList() 96 yearDirs.excludeFiles = True 97 yearDirs.excludeLinks = True 98 yearDirs.addDirContents(path=stagingDir, recursive=False, addSelf=False) 99 for yearDir in yearDirs: 100 monthDirs = FilesystemList() 101 monthDirs.excludeFiles = True 102 monthDirs.excludeLinks = True 103 monthDirs.addDirContents(path=yearDir, recursive=False, addSelf=False) 104 for monthDir in monthDirs: 105 dailyDirs = FilesystemList() 106 dailyDirs.excludeFiles = True 107 dailyDirs.excludeLinks = True 108 dailyDirs.addDirContents(path=monthDir, recursive=False, addSelf=False) 109 for dailyDir in dailyDirs: 110 if os.path.exists(os.path.join(dailyDir, indicatorFile)): 111 logger.debug("Skipping directory [%s]; contains %s." % (dailyDir, indicatorFile)) 112 else: 113 logger.debug("Adding [%s] to list of daily directories." % dailyDir) 114 results.append(dailyDir) # just put it in the list, no fancy operations 115 return results
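The nested traversal above can be sketched without C{FilesystemList}. This Python 3 version (a hypothetical helper using only plain C{os} calls, not the module's actual implementation) walks the same year/month/day staging layout:

```python
import os

def find_daily_dirs(staging_dir, indicator_file):
    # Walks staging_dir/<year>/<month>/<day> and returns daily
    # directories that do not yet contain the indicator file.
    results = []
    for year in sorted(os.listdir(staging_dir)):
        year_path = os.path.join(staging_dir, year)
        if not os.path.isdir(year_path):
            continue
        for month in sorted(os.listdir(year_path)):
            month_path = os.path.join(year_path, month)
            if not os.path.isdir(month_path):
                continue
            for day in sorted(os.listdir(month_path)):
                day_path = os.path.join(month_path, day)
                if os.path.isdir(day_path) and \
                        not os.path.exists(os.path.join(day_path, indicator_file)):
                    results.append(day_path)
    return results
```

For example, with staging directories `2013/05/09` (already stored) and `2013/05/10`, only the latter would be returned.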
    116 117 118 ########################### 119 # createWriter() function 120 ########################### 121
    122 -def createWriter(config):
    123 """ 124 Creates a writer object based on current configuration. 125 126 This function creates and returns a writer based on configuration. This is 127 done to abstract action functionality from knowing what kind of writer is in 128 use. Since all writers implement the same interface, there's no need for 129 actions to care which one they're working with. 130 131 Currently, the C{cdwriter} and C{dvdwriter} device types are allowed. An 132 exception will be raised if any other device type is used. 133 134 This function also checks to make sure that the device isn't mounted before 135 creating a writer object for it. Experience shows that sometimes if the 136 device is mounted, we have problems with the backup. We may as well do the 137 check here first, before instantiating the writer. 138 139 @param config: Config object. 140 141 @return: Writer that can be used to write a directory to some media. 142 143 @raise ValueError: If there is a problem getting the writer. 144 @raise IOError: If there is a problem creating the writer object. 145 """ 146 devicePath = config.store.devicePath 147 deviceScsiId = config.store.deviceScsiId 148 driveSpeed = config.store.driveSpeed 149 noEject = config.store.noEject 150 refreshMediaDelay = config.store.refreshMediaDelay 151 ejectDelay = config.store.ejectDelay 152 deviceType = _getDeviceType(config) 153 mediaType = _getMediaType(config) 154 if deviceMounted(devicePath): 155 raise IOError("Device [%s] is currently mounted." % (devicePath)) 156 if deviceType == "cdwriter": 157 return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) 158 elif deviceType == "dvdwriter": 159 return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) 160 else: 161 raise ValueError("Device type [%s] is invalid." % deviceType)
    162 163 164 ################################ 165 # writeIndicatorFile() function 166 ################################ 167
    168 -def writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup):
     169     """ 
     170     Writes an indicator file into a target directory. 
     171     @param targetDir: Target directory in which to write indicator 
     172     @param indicatorFile: Name of the indicator file 
     173     @param backupUser: User that indicator file should be owned by 
     174     @param backupGroup: Group that indicator file should be owned by 
     175     @raise IOError: If there is a problem writing the indicator file 
     176     """ 
     177     filename = os.path.join(targetDir, indicatorFile) 
     178     logger.debug("Writing indicator file [%s]." % filename) 
     179     try: 
     180        open(filename, "w").write("") 
     181        changeOwnership(filename, backupUser, backupGroup) 
     182     except Exception, e: 
     183        logger.error("Error writing [%s]: %s" % (filename, e)) 
     184        raise e 
     185   
     186   
     187  ############################ 
     188  # getBackupFiles() function 
     189  ############################ 
     190   
    191 -def getBackupFiles(targetDir):
     192     """ 
     193     Gets a list of backup files in a target directory. 
     194   
     195     Files that match INDICATOR_PATTERN (i.e. C{"cback.store"}, C{"cback.stage"}, 
     196     etc.) are assumed to be indicator files and are ignored. 
     197   
     198     @param targetDir: Directory to look in 
     199   
     200     @return: List of backup files in the directory 
     201   
     202     @raise ValueError: If the target directory does not exist 
     203     """ 
     204     if not os.path.isdir(targetDir): 
     205        raise ValueError("Target directory [%s] is not a directory or does not exist." % targetDir) 
     206     fileList = FilesystemList() 
     207     fileList.excludeDirs = True 
     208     fileList.excludeLinks = True 
     209     fileList.excludeBasenamePatterns = INDICATOR_PATTERN 
     210     fileList.addDirContents(targetDir) 
     211     return fileList 
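The indicator-file exclusion above relies on the package's FilesystemList class. The basename-matching idea can be sketched on its own; the pattern list below is an assumption for illustration only (the real INDICATOR_PATTERN constant is defined elsewhere in the package):

```python
import re

# Assumed pattern list, for illustration only; the real INDICATOR_PATTERN
# lives in the actions constants module.
INDICATOR_PATTERN = [r"cback\..*"]

def isIndicatorFile(name, patterns=INDICATOR_PATTERN):
   # True if a basename matches one of the indicator patterns, meaning the
   # file should be excluded from the list of backup files.
   return any(re.match(pattern, name) for pattern in patterns)
```

Under this assumed pattern, names like "cback.store" and "cback.stage" are excluded while ordinary backup files pass through.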
     212   
     213   
     214  #################### 
     215  # checkMediaState() 
     216  #################### 
     217   
    218 -def checkMediaState(storeConfig):
     219     """ 
     220     Checks state of the media in the backup device to confirm whether it has 
     221     been initialized for use with Cedar Backup. 
     222   
     223     We can tell whether the media has been initialized by looking at its media 
     224     label.  If the media label starts with MEDIA_LABEL_PREFIX, then it has been 
     225     initialized. 
     226   
     227     The check varies depending on whether the media is rewritable or not.  For 
     228     non-rewritable media, we also accept a C{None} media label, since this kind 
     229     of media cannot safely be initialized. 
     230   
     231     @param storeConfig: Store configuration 
     232   
     233     @raise ValueError: If media is not initialized. 
     234     """ 
     235     mediaLabel = readMediaLabel(storeConfig.devicePath) 
     236     if storeConfig.mediaType in REWRITABLE_MEDIA_TYPES: 
     237        if mediaLabel is None: 
     238           raise ValueError("Media has not been initialized: no media label available") 
     239        elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): 
     240           raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) 
     241     else: 
     242        if mediaLabel is None: 
     243           logger.info("Media has no media label; assuming OK since media is not rewritable.") 
     244        elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): 
     245           raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) 
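The label check above reduces to a small decision table; a standalone sketch follows. The MEDIA_LABEL_PREFIX value is an assumption for illustration, since the real constant is defined elsewhere in the package:

```python
MEDIA_LABEL_PREFIX = "CEDAR BACKUP"   # assumed value, for illustration only

def labelLooksInitialized(mediaLabel, rewritable):
   # A missing label is acceptable only for non-rewritable media, which
   # cannot safely be initialized with a label of our own; otherwise the
   # label must carry the recognized prefix.
   if mediaLabel is None:
      return not rewritable
   return mediaLabel.startswith(MEDIA_LABEL_PREFIX)
```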
     246   
     247   
     248  ######################### 
     249  # initializeMediaState() 
     250  ######################### 
     251   
    252 -def initializeMediaState(config):
     253     """ 
     254     Initializes state of the media in the backup device so Cedar Backup can 
     255     recognize it. 
     256   
     257     This is done by writing a mostly-empty image (it contains a "Cedar Backup" 
     258     directory) to the media with a known media label. 
     259   
     260     @note: Only rewritable media (CD-RW, DVD+RW) can be initialized.  It 
     261     doesn't make any sense to initialize media that cannot be rewritten (CD-R, 
     262     DVD+R), since Cedar Backup would then not be able to use that media for a 
     263     backup. 
     264   
     265     @param config: Cedar Backup configuration 
     266   
     267     @raise ValueError: If media could not be initialized. 
     268     @raise ValueError: If the configured media type is not rewritable 
     269     """ 
     270     if not config.store.mediaType in REWRITABLE_MEDIA_TYPES: 
     271        raise ValueError("Only rewritable media types can be initialized.") 
     272     mediaLabel = buildMediaLabel() 
     273     writer = createWriter(config) 
     274     writer.refreshMedia() 
     275     writer.initializeImage(True, config.options.workingDir, mediaLabel)  # always create a new disc 
     276     tempdir = tempfile.mkdtemp(dir=config.options.workingDir) 
     277     try: 
     278        writer.addImageEntry(tempdir, "CedarBackup") 
     279        writer.writeImage() 
     280     finally: 
     281        if os.path.exists(tempdir): 
     282           try: 
     283              os.rmdir(tempdir) 
     284           except: pass 
     285   
     286   
     287  #################### 
     288  # buildMediaLabel() 
     289  #################### 
     290   
    291 -def buildMediaLabel():
     292     """ 
     293     Builds a media label to be used on Cedar Backup media. 
     294     @return: Media label as a string. 
     295     """ 
     296     currentDate = time.strftime("%d-%b-%Y").upper() 
     297     return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate) 
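The label-building rule above can be exercised in isolation. This sketch assumes a MEDIA_LABEL_PREFIX value for illustration (the real constant is defined elsewhere in the package):

```python
import time

MEDIA_LABEL_PREFIX = "CEDAR BACKUP"   # assumed value, for illustration only

def buildMediaLabel():
   # Uppercased date suffix, e.g. "09-MAY-2013", appended to the prefix,
   # producing labels like "CEDAR BACKUP 09-MAY-2013".
   currentDate = time.strftime("%d-%b-%Y").upper()
   return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate)

label = buildMediaLabel()
```

This label format is what checkMediaState() later looks for via its prefix check.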
     298   
     299   
     300  ######################################################################## 
     301  # Private attribute "getter" functions 
     302  ######################################################################## 
     303   
     304  ############################ 
     305  # _getDeviceType() function 
     306  ############################ 
     307   
    308 -def _getDeviceType(config):
     309     """ 
     310     Gets the device type that should be used for storing. 
     311   
     312     Use the configured device type if not C{None}, otherwise use 
     313     L{config.DEFAULT_DEVICE_TYPE}. 
     314   
     315     @param config: Config object. 
     316     @return: Device type to be used. 
     317     """ 
     318     if config.store.deviceType is None: 
     319        deviceType = DEFAULT_DEVICE_TYPE 
     320     else: 
     321        deviceType = config.store.deviceType 
     322     logger.debug("Device type is [%s]" % deviceType) 
     323     return deviceType 
     324   
     325   
     326  ########################### 
     327  # _getMediaType() function 
     328  ########################### 
     329   
    330 -def _getMediaType(config):
     331     """ 
     332     Gets the media type that should be used for storing. 
     333   
     334     Use the configured media type if not C{None}, otherwise use 
     335     C{DEFAULT_MEDIA_TYPE}. 
     336   
     337     Once we figure out what configuration value to use, we return a media type 
     338     value that is valid in one of the supported writers:: 
     339   
     340        MEDIA_CDR_74 
     341        MEDIA_CDRW_74 
     342        MEDIA_CDR_80 
     343        MEDIA_CDRW_80 
     344        MEDIA_DVDPLUSR 
     345        MEDIA_DVDPLUSRW 
     346   
     347     @param config: Config object. 
     348   
     349     @return: Media type to be used as a writer media type value. 
     350     @raise ValueError: If the media type is not valid. 
     351     """ 
     352     if config.store.mediaType is None: 
     353        mediaType = DEFAULT_MEDIA_TYPE 
     354     else: 
     355        mediaType = config.store.mediaType 
     356     if mediaType == "cdr-74": 
     357        logger.debug("Media type is MEDIA_CDR_74.") 
     358        return MEDIA_CDR_74 
     359     elif mediaType == "cdrw-74": 
     360        logger.debug("Media type is MEDIA_CDRW_74.") 
     361        return MEDIA_CDRW_74 
     362     elif mediaType == "cdr-80": 
     363        logger.debug("Media type is MEDIA_CDR_80.") 
     364        return MEDIA_CDR_80 
     365     elif mediaType == "cdrw-80": 
     366        logger.debug("Media type is MEDIA_CDRW_80.") 
     367        return MEDIA_CDRW_80 
     368     elif mediaType == "dvd+r": 
     369        logger.debug("Media type is MEDIA_DVDPLUSR.") 
     370        return MEDIA_DVDPLUSR 
     371     elif mediaType == "dvd+rw": 
     372        logger.debug("Media type is MEDIA_DVDPLUSRW.") 
     373        return MEDIA_DVDPLUSRW 
     374     else: 
     375        raise ValueError("Media type [%s] is not valid." % mediaType) 
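The if/elif chain above is a straight string-to-constant mapping, which can also be expressed as a dictionary lookup. The constants below are placeholders for illustration only; the real values come from the cdwriter/dvdwriter modules:

```python
# Placeholder writer constants, for illustration only; the real values are
# defined by the writer modules.
(MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80,
 MEDIA_CDRW_80, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW) = range(6)

MEDIA_TYPE_MAP = {
   "cdr-74": MEDIA_CDR_74,
   "cdrw-74": MEDIA_CDRW_74,
   "cdr-80": MEDIA_CDR_80,
   "cdrw-80": MEDIA_CDRW_80,
   "dvd+r": MEDIA_DVDPLUSR,
   "dvd+rw": MEDIA_DVDPLUSRW,
}

def lookupMediaType(mediaType):
   # Map a configured media type string onto a writer constant, raising
   # ValueError for unrecognized strings, like _getMediaType() does.
   try:
      return MEDIA_TYPE_MAP[mediaType]
   except KeyError:
      raise ValueError("Media type [%s] is not valid." % mediaType)
```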
    376

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.writers.util-module.html0000664000175000017500000000403712143054362030112 0ustar pronovicpronovic00000000000000 util

    Module util


    Classes

    IsoImage

    Functions

    readMediaLabel
    validateDevice
    validateDriveSpeed
    validateScsiId

    Variables

    MKISOFS_COMMAND
    VOLNAME_COMMAND
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.sysinfo-pysrc.html0000664000175000017500000023412112143054365027511 0ustar pronovicpronovic00000000000000 CedarBackup2.extend.sysinfo
    Package CedarBackup2 :: Package extend :: Module sysinfo

    Source Code for Module CedarBackup2.extend.sysinfo

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2005,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Revision : $Id: sysinfo.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Provides an extension to save off important system recovery information. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Provides an extension to save off important system recovery information. 
     41   
     42  This is a simple Cedar Backup extension used to save off important system 
     43  recovery information.  It saves off three types of information: 
     44   
     45     - Currently-installed Debian packages via C{dpkg --get-selections} 
     46     - Disk partition information via C{fdisk -l} 
     47     - System-wide mounted filesystem contents, via C{ls -laR} 
     48   
     49  The saved-off information is placed into the collect directory and is 
     50  compressed using C{bzip2} to save space. 
     51   
     52  This extension relies on the options and collect configurations in the standard 
     53  Cedar Backup configuration file, but requires no new configuration of its own. 
     54  No public functions other than the action are exposed since all of this is 
     55  pretty simple. 
     56   
     57  @note: If the C{dpkg} or C{fdisk} commands cannot be found in their normal 
     58  locations or executed by the current user, those steps will be skipped and a 
     59  note will be logged at the INFO level. 
     60   
     61  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     62  """ 
     63   
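Each dump in this extension boils down to "run a command and save its stdout, bzip2-compressed". A minimal standalone sketch of that pattern follows, using the Python interpreter itself as a portable stand-in for C{dpkg} or C{fdisk} (the real extension also checks command availability and return status):

```python
import os
import subprocess
import sys
import tempfile
from bz2 import BZ2File

def dumpCommandOutput(command, filename):
   # Run a command and write its stdout to a bzip2-compressed file.
   output = subprocess.Popen(command, stdout=subprocess.PIPE).communicate()[0]
   outputFile = BZ2File(filename, "w")
   try:
      outputFile.write(output)
   finally:
      outputFile.close()

# Demonstration: capture a trivial command's output into a temporary file.
path = os.path.join(tempfile.mkdtemp(), "dump.txt.bz2")
dumpCommandOutput([sys.executable, "-c", "print('hello')"], path)
```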
     64  ######################################################################## 
     65  # Imported modules 
     66  ######################################################################## 
     67   
     68  # System modules 
     69  import os 
     70  import logging 
     71  from bz2 import BZ2File 
     72   
     73  # Cedar Backup modules 
     74  from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership 
     75   
     76   
     77  ######################################################################## 
     78  # Module-wide constants and variables 
     79  ######################################################################## 
     80   
     81  logger = logging.getLogger("CedarBackup2.log.extend.sysinfo") 
     82   
     83  DPKG_PATH      = "/usr/bin/dpkg" 
     84  FDISK_PATH     = "/sbin/fdisk" 
     85   
     86  DPKG_COMMAND   = [ DPKG_PATH, "--get-selections", ] 
     87  FDISK_COMMAND  = [ FDISK_PATH, "-l", ] 
     88  LS_COMMAND     = [ "ls", "-laR", "/", ] 
     89   
     90   
     91  ######################################################################## 
     92  # Public functions 
     93  ######################################################################## 
     94   
     95  ########################### 
     96  # executeAction() function 
     97  ########################### 
     98   
    
    99 -def executeAction(configPath, options, config):
     100     """ 
     101     Executes the sysinfo backup action. 
     102   
     103     @param configPath: Path to configuration file on disk. 
     104     @type configPath: String representing a path on disk. 
     105   
     106     @param options: Program command-line options. 
     107     @type options: Options object. 
     108   
     109     @param config: Program configuration. 
     110     @type config: Config object. 
     111   
     112     @raise ValueError: Under many generic error conditions 
     113     @raise IOError: If the backup process fails for some reason. 
     114     """ 
     115     logger.debug("Executing sysinfo extended action.") 
     116     if config.options is None or config.collect is None: 
     117        raise ValueError("Cedar Backup configuration is not properly filled in.") 
     118     _dumpDebianPackages(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) 
     119     _dumpPartitionTable(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) 
     120     _dumpFilesystemContents(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) 
     121     logger.info("Executed the sysinfo extended action successfully.") 
    122
    123 -def _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True):
     124     """ 
     125     Dumps a list of currently installed Debian packages via C{dpkg}. 
     126     @param targetDir: Directory to write output file into. 
     127     @param backupUser: User which should own the resulting file. 
     128     @param backupGroup: Group which should own the resulting file. 
     129     @param compress: Indicates whether to compress the output file. 
     130     @raise IOError: If the dump fails for some reason. 
     131     """ 
     132     if not os.path.exists(DPKG_PATH): 
     133        logger.info("Not executing Debian package dump since %s doesn't seem to exist." % DPKG_PATH) 
     134     elif not os.access(DPKG_PATH, os.X_OK): 
     135        logger.info("Not executing Debian package dump since %s cannot be executed." % DPKG_PATH) 
     136     else: 
     137        (outputFile, filename) = _getOutputFile(targetDir, "dpkg-selections", compress) 
     138        try: 
     139           command = resolveCommand(DPKG_COMMAND) 
     140           result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0] 
     141           if result != 0: 
     142              raise IOError("Error [%d] executing Debian package dump." % result) 
     143        finally: 
     144           outputFile.close() 
     145        if not os.path.exists(filename): 
     146           raise IOError("File [%s] does not seem to exist after Debian package dump finished." % filename) 
     147        changeOwnership(filename, backupUser, backupGroup) 
    148
    149 -def _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True):
     150     """ 
     151     Dumps information about the partition table via C{fdisk}. 
     152     @param targetDir: Directory to write output file into. 
     153     @param backupUser: User which should own the resulting file. 
     154     @param backupGroup: Group which should own the resulting file. 
     155     @param compress: Indicates whether to compress the output file. 
     156     @raise IOError: If the dump fails for some reason. 
     157     """ 
     158     if not os.path.exists(FDISK_PATH): 
     159        logger.info("Not executing partition table dump since %s doesn't seem to exist." % FDISK_PATH) 
     160     elif not os.access(FDISK_PATH, os.X_OK): 
     161        logger.info("Not executing partition table dump since %s cannot be executed." % FDISK_PATH) 
     162     else: 
     163        (outputFile, filename) = _getOutputFile(targetDir, "fdisk-l", compress) 
     164        try: 
     165           command = resolveCommand(FDISK_COMMAND) 
     166           result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, outputFile=outputFile)[0] 
     167           if result != 0: 
     168              raise IOError("Error [%d] executing partition table dump." % result) 
     169        finally: 
     170           outputFile.close() 
     171        if not os.path.exists(filename): 
     172           raise IOError("File [%s] does not seem to exist after partition table dump finished." % filename) 
     173        changeOwnership(filename, backupUser, backupGroup) 
    174
    175 -def _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True):
     176     """ 
     177     Dumps complete listing of filesystem contents via C{ls -laR}. 
     178     @param targetDir: Directory to write output file into. 
     179     @param backupUser: User which should own the resulting file. 
     180     @param backupGroup: Group which should own the resulting file. 
     181     @param compress: Indicates whether to compress the output file. 
     182     @raise IOError: If the dump fails for some reason. 
     183     """ 
     184     (outputFile, filename) = _getOutputFile(targetDir, "ls-laR", compress) 
     185     try: 
     186        # Note: can't count on return status from 'ls', so we don't check it. 
     187        command = resolveCommand(LS_COMMAND) 
     188        executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile) 
     189     finally: 
     190        outputFile.close() 
     191     if not os.path.exists(filename): 
     192        raise IOError("File [%s] does not seem to exist after filesystem contents dump finished." % filename) 
     193     changeOwnership(filename, backupUser, backupGroup) 
    194
    195 -def _getOutputFile(targetDir, name, compress=True):
     196     """ 
     197     Opens the output file used for saving a dump to the filesystem. 
     198   
     199     The filename will be C{name.txt} (or C{name.txt.bz2} if C{compress} is 
     200     C{True}), written in the target directory. 
     201   
     202     @param targetDir: Target directory to write file in. 
     203     @param name: Name of the file to create. 
     204     @param compress: Indicates whether to write compressed output. 
     205   
     206     @return: Tuple of (Output file object, filename) 
     207     """ 
     208     filename = os.path.join(targetDir, "%s.txt" % name) 
     209     if compress: 
     210        filename = "%s.bz2" % filename 
     211     logger.debug("Dump file will be [%s]." % filename) 
     212     if compress: 
     213        outputFile = BZ2File(filename, "w") 
     214     else: 
     215        outputFile = open(filename, "w") 
     216     return (outputFile, filename) 
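The naming rule above is easy to check in isolation; this sketch reproduces just the filename derivation, without opening any file:

```python
import os

def buildDumpFilename(targetDir, name, compress=True):
   # Mirrors the rule above: name.txt in the target directory, with a
   # .bz2 suffix appended when compression is requested.
   filename = os.path.join(targetDir, "%s.txt" % name)
   if compress:
      filename = "%s.bz2" % filename
   return filename
```

So "ls-laR" in the collect directory becomes "ls-laR.txt.bz2" by default, or "ls-laR.txt" when compression is disabled.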
    217

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions-pysrc.html0000664000175000017500000002603512143054366026175 0ustar pronovicpronovic00000000000000 CedarBackup2.actions
    Package CedarBackup2 :: Package actions

    Source Code for Package CedarBackup2.actions

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Revision : $Id: __init__.py 998 2010-07-07 19:56:08Z pronovic $ 
    15  # Purpose  : Provides package initialization 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  ######################################################################## 
    20  # Module documentation 
    21  ######################################################################## 
    22   
    23  """ 
    24  Cedar Backup actions. 
    25   
     26  This package contains code related to the official Cedar Backup actions (collect, 
    27  stage, store, purge, rebuild, and validate). 
    28   
     29  The action modules consist mostly of "glue" code that uses other lower-level 
    30  functionality to actually implement a backup.  There is one module for each 
    31  high-level backup action, plus a module that provides shared constants. 
    32   
     33  All of the public action functions implement the Cedar Backup Extension 
    34  Architecture Interface, i.e. the same interface that extensions implement. 
    35   
    36  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    37  """ 
    38   
    39   
    40  ######################################################################## 
    41  # Package initialization 
    42  ######################################################################## 
    43   
    44  # Using 'from CedarBackup2.actions import *' will just import the modules listed 
    45  # in the __all__ variable. 
    46   
    47  __all__ = [ 'constants', 'collect', 'initialize', 'stage', 'store', 'purge', 'util', 'rebuild', 'validate', ] 
    48   
    

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.image-pysrc.html0000664000175000017500000002474212143054365025621 0ustar pronovicpronovic00000000000000 CedarBackup2.image
    Package CedarBackup2 :: Module image

    Source Code for Module CedarBackup2.image

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Cedar Backup, release 2 
    14  # Revision : $Id: image.py 1022 2011-10-11 23:27:49Z pronovic $ 
    15  # Purpose  : Provides interface backwards compatibility. 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  ######################################################################## 
    20  # Module documentation 
    21  ######################################################################## 
    22   
    23  """ 
    24  Provides interface backwards compatibility. 
    25   
    26  In Cedar Backup 2.10.0, a refactoring effort took place while adding code to 
    27  support DVD hardware.  All of the writer functionality was moved to the 
    28  writers/ package.  This mostly-empty file remains to preserve the Cedar Backup 
    29  library interface. 
    30   
    31  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    32  """ 
    33   
    34  ######################################################################## 
    35  # Imported modules 
    36  ######################################################################## 
    37   
    38  from CedarBackup2.writers.util import IsoImage  # pylint: disable=W0611 
    39   
    

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.rebuild-pysrc.html0000664000175000017500000016261312143054364027623 0ustar pronovicpronovic00000000000000 CedarBackup2.actions.rebuild
    Package CedarBackup2 :: Package actions :: Module rebuild

    Source Code for Module CedarBackup2.actions.rebuild

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python (>= 2.5) 
     29  # Project  : Cedar Backup, release 2 
     30  # Revision : $Id: rebuild.py 1006 2010-07-07 21:03:57Z pronovic $ 
     31  # Purpose  : Implements the standard 'rebuild' action. 
     32  # 
     33  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     34   
     35  ######################################################################## 
     36  # Module documentation 
     37  ######################################################################## 
     38   
     39  """ 
     40  Implements the standard 'rebuild' action. 
     41  @sort: executeRebuild 
     42  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import sys 
     52  import os 
     53  import logging 
     54  import datetime 
     55   
     56  # Cedar Backup modules 
     57  from CedarBackup2.util import deriveDayOfWeek 
     58  from CedarBackup2.actions.util import checkMediaState 
     59  from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
     60  from CedarBackup2.actions.store import writeImage, writeStoreIndicator, consistencyCheck 
     61   
     62   
     63  ######################################################################## 
     64  # Module-wide constants and variables 
     65  ######################################################################## 
     66   
     67  logger = logging.getLogger("CedarBackup2.log.actions.rebuild") 
     68   
     69   
     70  ######################################################################## 
     71  # Public functions 
     72  ######################################################################## 
     73   
     74  ############################ 
     75  # executeRebuild() function 
     76  ############################ 
     77   
    
    78 -def executeRebuild(configPath, options, config):
      79     """ 
      80     Executes the rebuild backup action. 
      81   
      82     This function exists mainly to recreate a disc that has been "trashed" due 
      83     to media or hardware problems.  Note that the "stage complete" indicator 
      84     isn't checked for this action. 
      85   
      86     Note that the rebuild action and the store action are very similar.  The 
      87     main difference is that while store only stores a single day's staging 
      88     directory, the rebuild action operates on multiple staging directories. 
      89   
      90     @param configPath: Path to configuration file on disk. 
      91     @type configPath: String representing a path on disk. 
      92   
      93     @param options: Program command-line options. 
      94     @type options: Options object. 
      95   
      96     @param config: Program configuration. 
      97     @type config: Config object. 
      98   
      99     @raise ValueError: Under many generic error conditions 
     100     @raise IOError: If there are problems reading or writing files. 
     101     """ 
     102     logger.debug("Executing the 'rebuild' action.") 
     103     if sys.platform == "darwin": 
     104        logger.warn("Warning: the rebuild action is not fully supported on Mac OS X.") 
     105        logger.warn("See the Cedar Backup software manual for further information.") 
     106     if config.options is None or config.store is None: 
     107        raise ValueError("Rebuild configuration is not properly filled in.") 
     108     if config.store.checkMedia: 
     109        checkMediaState(config.store)  # raises exception if media is not initialized 
     110     stagingDirs = _findRebuildDirs(config) 
     111     writeImage(config, True, stagingDirs) 
     112     if config.store.checkData: 
     113        if sys.platform == "darwin": 
     114           logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.") 
     115           logger.warn("See the Cedar Backup software manual for further information.") 
     116        else: 
     117           logger.debug("Running consistency check of media.") 
     118           consistencyCheck(config, stagingDirs) 
     119     writeStoreIndicator(config, stagingDirs) 
     120     logger.info("Executed the 'rebuild' action successfully.") 
     121   
     122   
     123  ######################################################################## 
     124  # Private utility functions 
     125  ######################################################################## 
     126   
     127  ############################## 
     128  # _findRebuildDirs() function 
     129  ############################## 
     130   
    131 -def _findRebuildDirs(config):
     132     """ 
     133     Finds the set of directories to be included in a disc rebuild. 
     134   
     135     The rebuild action is supposed to recreate the "last week's" disc.  This 
     136     won't always be possible if some of the staging directories are missing. 
     137     However, the general procedure is to look back into the past no further than 
     138     the previous "starting day of week", and then work forward from there trying 
     139     to find all of the staging directories between then and now that still exist 
     140     and have a stage indicator. 
     141   
     142     @param config: Config object. 
     143   
     144     @return: Correct staging dirs, as a dict mapping directory to date suffix. 
     145     @raise IOError: If we do not find at least one staging directory. 
     146     """ 
     147     stagingDirs = {} 
     148     start = deriveDayOfWeek(config.options.startingDay) 
     149     today = datetime.date.today() 
     150     if today.weekday() >= start: 
     151        days = today.weekday() - start + 1 
     152     else: 
     153        days = 7 - (start - today.weekday()) + 1 
     154     for i in range(0, days): 
     155        currentDay = today - datetime.timedelta(days=i) 
     156        dateSuffix = currentDay.strftime(DIR_TIME_FORMAT) 
     157        stageDir = os.path.join(config.store.sourceDir, dateSuffix) 
     158        indicator = os.path.join(stageDir, STAGE_INDICATOR) 
     159        if os.path.isdir(stageDir) and os.path.exists(indicator): 
     160           logger.info("Rebuild process will include stage directory [%s]" % stageDir) 
     161           stagingDirs[stageDir] = dateSuffix 
     162     if len(stagingDirs) == 0: 
     163        raise IOError("Unable to find any staging directories for rebuild process.") 
     164     return stagingDirs 
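The lookback arithmetic above can be isolated and checked on its own; weekday numbers follow C{datetime.date.weekday()} conventions (Monday=0 through Sunday=6):

```python
def lookbackDays(todayWeekday, startWeekday):
   # Days to search backwards (inclusive of today) to reach the most
   # recent occurrence of the configured starting day of the week.
   if todayWeekday >= startWeekday:
      return todayWeekday - startWeekday + 1
   return 7 - (startWeekday - todayWeekday) + 1
```

For example, with a Monday starting day, a rebuild run on Friday searches five days of staging directories, and a run on the starting day itself searches just one.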
    165

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.extend.capacity.LocalConfig-class.html0000664000175000017500000010622212143054363031736 0ustar pronovicpronovic00000000000000 CedarBackup2.extend.capacity.LocalConfig
    Package CedarBackup2 :: Package extend :: Module capacity :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit specific configuration values to this extension. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <capacity> configuration section as the next child of a parent.
    source code
     
    _setCapacity(self, value)
    Property target used to set the capacity configuration value.
    source code
     
    _getCapacity(self)
    Property target used to get the capacity configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseCapacity(parentNode)
    Parses a capacity configuration section.
    source code
     
    _readPercentageQuantity(parent, name)
    Read a percentage quantity value from an XML document.
    source code
     
    _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity)
    Adds a text node as the next child of a parent, to contain a percentage quantity.
    source code
Properties
      capacity
    Capacity configuration in terms of a CapacityConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

If you initialize the object without passing either xmlData or xmlPath, then the configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
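The constructor contract described above can be sketched as a minimal class. This is an illustration only (the class name is ours, not the real LocalConfig): xmlData and xmlPath are mutually exclusive, and validation runs after parsing unless disabled.

```python
class ConfigSketch:
    def __init__(self, xmlData=None, xmlPath=None, validate=True):
        if xmlData is not None and xmlPath is not None:
            raise ValueError("Use either xmlData or xmlPath, not both.")
        if xmlPath is not None:
            with open(xmlPath) as f:
                xmlData = f.read()
        self._parse(xmlData)        # may raise ValueError on unparseable XML
        if validate:
            self.validate()         # may raise ValueError on invalid config

    def _parse(self, xmlData):
        self.data = xmlData         # placeholder for real DOM parsing

    def validate(self):
        pass                        # placeholder for real validations
```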

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

Validates configuration represented by the object. There must be either a percentage or a byte capacity, but not both.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <capacity> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      maxPercentage  //cb_config/capacity/max_percentage
      minBytes       //cb_config/capacity/min_bytes
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
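The structure addConfig produces can be sketched with the standard xml.dom API. The element names follow the field table above; the values here are illustrative only.

```python
from xml.dom import getDOMImplementation

# Build //cb_config/capacity/{max_percentage,min_bytes} as described above.
impl = getDOMImplementation()
doc = impl.createDocument(None, "cb_config", None)
capacity = doc.createElement("capacity")
doc.documentElement.appendChild(capacity)
for name, value in [("max_percentage", "95.0"), ("min_bytes", "1000000")]:
    node = doc.createElement(name)
    node.appendChild(doc.createTextNode(value))
    capacity.appendChild(node)
print(doc.documentElement.toxml())
```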

    _setCapacity(self, value)

    source code 

    Property target used to set the capacity configuration value. If not None, the value must be a CapacityConfig object.

    Raises:
    • ValueError - If the value is not a CapacityConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the capacity configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseCapacity(parentNode)
    Static Method

    source code 

    Parses a capacity configuration section.

    We read the following fields:

      maxPercentage  //cb_config/capacity/max_percentage
      minBytes       //cb_config/capacity/min_bytes
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    CapacityConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _readPercentageQuantity(parent, name)
    Static Method

    source code 

    Read a percentage quantity value from an XML document.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Percentage quantity parsed from XML document

    _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity)
    Static Method

    source code 

    Adds a text node as the next child of a parent, to contain a percentage quantity.

    If the percentageQuantity is None, then no node will be created.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • percentageQuantity - PercentageQuantity object to put into the XML document
    Returns:
    Reference to the newly-created node.

Property Details

    capacity

    Capacity configuration in terms of a CapacityConfig object.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity configuration value.
    Set Method:
    _setCapacity(self, value) - Property target used to set the capacity configuration value.

CedarBackup2.tools.span.SpanOptions

    Class SpanOptions

    source code

     object --+    
              |    
    cli.Options --+
                  |
                 SpanOptions
    

    Tool-specific command-line options.

    Most of the cback command-line options are exactly what we need here -- logfile path, permissions, verbosity, etc. However, we need to make a few tweaks since we don't accept any actions.

    Also, a few extra command line options that we accept are really ignored underneath. I just don't care about that for a tool like this.

Instance Methods
     
    validate(self)
    Validates command-line options represented by the object.
    source code

    Inherited from cli.Options: __cmp__, __init__, __repr__, __str__, buildArgumentList, buildArgumentString

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

    Inherited from cli.Options: actions, config, debug, diagnostics, full, help, logfile, managed, managedOnly, mode, output, owner, quiet, stacktrace, verbose, version

    Inherited from object: __class__

Method Details

    validate(self)

    source code 

    Validates command-line options represented by the object. There are no validations here, because we don't use any actions.

    Raises:
    • ValueError - If one of the validations fails.
    Overrides: cli.Options.validate


    Module stage


    Functions

    executeStage

    Variables

    __package__
    logger

CedarBackup2.extend.subversion.RepositoryDir

    Class RepositoryDir

    source code

    object --+
             |
            RepositoryDir
    

    Class representing Subversion repository directory.

    A repository directory is a directory that contains one or more Subversion repositories.

    The following restrictions exist on data in this class:

    • The directory path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    The repository type value is kept around just for reference. It doesn't affect the behavior of the backup.

    Relative exclusions are allowed here. However, there is no configured ignore file, because repository dir backups are not recursive.

Instance Methods
     
    __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    Constructor for the RepositoryDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setRepositoryType(self, value)
    Property target used to set the repository type.
    source code
     
    _getRepositoryType(self)
    Property target used to get the repository type.
    source code
     
    _setDirectoryPath(self, value)
    Property target used to set the directory path.
    source code
     
    _getDirectoryPath(self)
    Property target used to get the repository path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      directoryPath
    Absolute path of the Subversion parent directory.
      collectMode
    Overridden collect mode for this repository.
      compressMode
    Overridden compress mode for this repository.
      repositoryType
    Type of this repository, for reference.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.

    Inherited from object: __class__

Method Details

    __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    (Constructor)

    source code 

    Constructor for the RepositoryDir class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • directoryPath - Absolute path of the Subversion parent directory
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setRepositoryType(self, value)

    source code 

    Property target used to set the repository type. There is no validation; this value is kept around just for reference.

    _setDirectoryPath(self, value)

    source code 

    Property target used to set the directory path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.
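The absolute-path validation described above amounts to a simple check. A sketch (the helper name is ours; the real code lives in a property setter):

```python
import os

def check_directory_path(value):
    # Absolute path required when not None; the path need not exist on disk.
    if value is not None and not os.path.isabs(value):
        raise ValueError("Repository directory path must be absolute.")
    return value
```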

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


Property Details

    directoryPath

    Absolute path of the Subversion parent directory.

    Get Method:
    _getDirectoryPath(self) - Property target used to get the repository path.
    Set Method:
    _setDirectoryPath(self, value) - Property target used to set the directory path.

    collectMode

    Overridden collect mode for this repository.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this repository.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositoryType

    Type of this repository, for reference.

    Get Method:
    _getRepositoryType(self) - Property target used to get the repository type.
    Set Method:
    _setRepositoryType(self, value) - Property target used to set the repository type.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

CedarBackup2.extend.postgresql

    Module postgresql

    source code

    Provides an extension to back up PostgreSQL databases.

This is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. It requires a new configuration section <postgresql> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate voodoo in the pg_hba.conf file.

    Note that this code always produces a full backup. There is currently no facility for making incremental backups.

You should always make /etc/cback.conf unreadable to non-root users once you place postgresql configuration into it, since postgresql configuration will contain information about available PostgreSQL databases and usernames.

    Use of this extension may expose usernames in the process listing (via ps) when the backup is running if the username is specified in the configuration.
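The dump invocation described above can be sketched as follows. This is an illustration under stated assumptions, not the extension's actual code: the command constants mirror the module's POSTGRESQLDUMP_COMMAND/POSTGRESQLDUMPALL_COMMAND variables, and passing the user via -U is our assumption about how the client is invoked.

```python
import subprocess

POSTGRESQLDUMP_COMMAND = ["pg_dump"]
POSTGRESQLDUMPALL_COMMAND = ["pg_dumpall"]

def build_dump_command(user, database=None):
    # pg_dumpall for all databases, pg_dump for a single named one.
    if database is None:
        return POSTGRESQLDUMPALL_COMMAND + ["-U", user]
    return POSTGRESQLDUMP_COMMAND + ["-U", user, database]

def run_dump(user, backup_file, database=None):
    # Stream the dump into the caller-supplied file object; the caller may
    # pass a GzipFile to get compressed output, and must close it afterward.
    proc = subprocess.Popen(build_dump_command(user, database), stdout=backup_file)
    if proc.wait() != 0:
        raise IOError("PostgreSQL dump failed.")
```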


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Antoine Beaupre <anarcat@koumbit.org>
Classes
      PostgresqlConfig
    Class representing PostgreSQL configuration.
      LocalConfig
    Class representing this extension's configuration document.
Functions
     
    executeAction(configPath, options, config)
    Executes the PostgreSQL backup action.
    source code
     
    _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None)
    Backs up an individual PostgreSQL database, or all databases.
    source code
     
    _getOutputFile(targetDir, database, compressMode)
    Opens the output file used for saving the PostgreSQL dump.
    source code
     
    backupDatabase(user, backupFile, database=None)
    Backs up an individual PostgreSQL database, or all databases.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.extend.postgresql")
      POSTGRESQLDUMP_COMMAND = ['pg_dump']
      POSTGRESQLDUMPALL_COMMAND = ['pg_dumpall']
      __package__ = 'CedarBackup2.extend'
Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the PostgreSQL backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None)

    source code 

    Backs up an individual PostgreSQL database, or all databases.

    This internal method wraps the public method and adds some functionality, like figuring out a filename, etc.

    Parameters:
    • targetDir - Directory into which backups should be written.
    • compressMode - Compress mode to be used for backed-up files.
    • user - User to use for connecting to the database.
    • backupUser - User to own resulting file.
    • backupGroup - Group to own resulting file.
    • database - Name of database, or None for all databases.
    Returns:
    Name of the generated backup file.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the PostgreSQL dump.

    _getOutputFile(targetDir, database, compressMode)

    source code 

    Opens the output file used for saving the PostgreSQL dump.

The filename is either "postgresqldump.txt" or "postgresqldump-<database>.txt". The ".gz" or ".bz2" extension is added based on the compress mode.

    Parameters:
    • targetDir - Target directory to write file in.
    • database - Name of the database (if any)
    • compressMode - Compress mode to be used for backed-up files.
    Returns:
    Tuple of (Output file object, filename)
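The filename rules above can be sketched directly (the helper name and the compress-mode strings are our assumptions for illustration):

```python
import os

def output_filename(target_dir, database=None, compress_mode="none"):
    # "postgresqldump.txt" for all databases, "postgresqldump-<db>.txt"
    # for a named one; extension reflects the compress mode.
    name = "postgresqldump.txt" if database is None else "postgresqldump-%s.txt" % database
    if compress_mode == "gzip":
        name += ".gz"
    elif compress_mode == "bzip2":
        name += ".bz2"
    return os.path.join(target_dir, name)
```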

    backupDatabase(user, backupFile, database=None)

    source code 

    Backs up an individual PostgreSQL database, or all databases.

    This function backs up either a named local PostgreSQL database or all local PostgreSQL databases, using the passed in user for connectivity. This is always a full backup. There is no facility for incremental backups.

The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open(), but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Parameters:
    • user (String representing PostgreSQL username.) - User to use for connecting to the database.
• backupFile (Python file object as from open() or file().) - File to use for writing the backup.
    • database (String representing database name, or None for all databases.) - Name of the database to be backed up.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the PostgreSQL dump.

    Note: Typically, you would use the root user to back up all databases.


    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.actions-module.html0000664000175000017500000000216212143054362027074 0ustar pronovicpronovic00000000000000 actions

    Module actions


    Variables


    [hide private] CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.extend.encrypt-module.html0000664000175000017500000000506212143054362030410 0ustar pronovicpronovic00000000000000 encrypt

    Module encrypt


    Classes

    EncryptConfig
    LocalConfig

    Functions

    executeAction

    Variables

    ENCRYPT_INDICATOR
    GPG_COMMAND
    VALID_ENCRYPT_MODES
    __package__
    logger

CedarBackup2.testutil

    Module testutil

    source code

    Provides unit-testing utilities.

    These utilities are kept here, separate from util.py, because they provide common functionality that I do not want exported "publicly" once Cedar Backup is installed on a system. They are only used for unit testing, and are only useful within the source tree.

    Many of these functions are in here because they are "good enough" for unit test work but are not robust enough to be real public functions. Others (like removedir) do what they are supposed to, but I don't want responsibility for making them available to others.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
     
    findResources(resources, dataDirs)
    Returns a dictionary of locations for various resources.
    source code
     
    commandAvailable(command)
    Indicates whether a command is available on $PATH somewhere.
    source code
     
    buildPath(components)
    Builds a complete path from a list of components.
    source code
     
    removedir(tree)
    Recursively removes an entire directory.
    source code
     
    extractTar(tmpdir, filepath)
    Extracts the indicated tar file to the indicated tmpdir.
    source code
     
    changeFileAge(filename, subtract=None)
    Changes a file age using the os.utime function.
    source code
     
    getMaskAsMode()
    Returns the user's current umask inverted to a mode.
    source code
     
    getLogin()
    Returns the name of the currently-logged in user.
    source code
     
    failUnlessAssignRaises(testCase, exception, obj, prop, value)
    Equivalent of failUnlessRaises, but used for property assignments instead.
    source code
     
    runningAsRoot()
    Returns boolean indicating whether the effective user id is root.
    source code
     
    platformDebian()
    Returns boolean indicating whether this is the Debian platform.
    source code
     
    platformMacOsX()
    Returns boolean indicating whether this is the Mac OS X platform.
    source code
     
    platformCygwin()
    Returns boolean indicating whether this is the Cygwin platform.
    source code
     
    platformWindows()
    Returns boolean indicating whether this is the Windows platform.
    source code
     
    platformHasEcho()
    Returns boolean indicating whether the platform has a sensible echo command.
    source code
     
    platformSupportsLinks()
    Returns boolean indicating whether the platform supports soft-links.
    source code
     
    platformSupportsPermissions()
    Returns boolean indicating whether the platform supports UNIX-style file permissions.
    source code
     
    platformRequiresBinaryRead()
    Returns boolean indicating whether the platform requires binary reads.
    source code
     
    setupDebugLogger()
    Sets up a screen logger for debugging purposes.
    source code
     
    setupOverrides()
    Set up any platform-specific overrides that might be required.
    source code
     
    randomFilename(length, prefix=None, suffix=None)
    Generates a random filename with the given length.
    source code
     
    captureOutput(c)
    Captures the output (stdout, stderr) of a function or a method.
    source code
     
    _isPlatform(name)
    Returns boolean indicating whether we're running on the indicated platform.
    source code
     
    availableLocales()
    Returns a list of available locales on the system
    source code
     
    hexFloatLiteralAllowed()
    Indicates whether hex float literals are allowed by the interpreter.
    source code
Variables
      __package__ = 'CedarBackup2'
Function Details

    findResources(resources, dataDirs)

    source code 

    Returns a dictionary of locations for various resources.

    Parameters:
    • resources - List of required resources.
    • dataDirs - List of data directories to search within for resources.
    Returns:
    Dictionary mapping resource name to resource path.
    Raises:
    • Exception - If some resource cannot be found.

    commandAvailable(command)

    source code 

    Indicates whether a command is available on $PATH somewhere. This should work on both Windows and UNIX platforms.

    Parameters:
• command - Command to search for
    Returns:
    Boolean true/false depending on whether command is available.

    buildPath(components)

    source code 

Builds a complete path from a list of components. For instance, constructs "/a/b/c" from ["/a", "b", "c"].

    Parameters:
    • components - List of components.
    Returns:
    String path constructed from components.
    Raises:
    • ValueError - If a path cannot be encoded properly.
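The joining behavior described above can be sketched with a left fold over os.path.join (POSIX-style paths assumed; the helper name is ours):

```python
import os
from functools import reduce

def build_path(components):
    # Joins components left-to-right: ["/a", "b", "c"] -> "/a/b/c"
    return reduce(os.path.join, components)
```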

    removedir(tree)

    source code 

    Recursively removes an entire directory. This is basically taken from an example on python.com.

    Parameters:
    • tree - Directory tree to remove.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    extractTar(tmpdir, filepath)

    source code 

    Extracts the indicated tar file to the indicated tmpdir.

    Parameters:
    • tmpdir - Temp directory to extract to.
    • filepath - Path to tarfile to extract.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    changeFileAge(filename, subtract=None)

    source code 

    Changes a file age using the os.utime function.

    Parameters:
    • filename - File to operate on.
    • subtract - Number of seconds to subtract from the current time.
    Raises:
    • ValueError - If a path cannot be encoded properly.

Note: Some platforms don't seem to be able to set an age precisely. As a result, whereas we might have intended to set an age of 86400 seconds, we actually get an age of 86399.375 seconds. When util.calculateFileAge() looks at the file, it calculates an age of 0.999992766204 days, which then gets truncated down to zero whole days. The tests get very confused. To work around this, I always subtract off one additional second as a fudge factor. That way, the file age will be at least as old as requested later on.
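The behavior described, including the one-second fudge factor from the note, amounts to a small wrapper around os.utime. A sketch (helper name is ours):

```python
import os
import time

def change_file_age(filename, subtract=None):
    # Push atime/mtime back by `subtract` seconds; the extra -1 is the
    # fudge factor described in the note above.
    if subtract is not None:
        stamp = time.time() - subtract - 1
        os.utime(filename, (stamp, stamp))
```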

    getMaskAsMode()

    source code 

    Returns the user's current umask inverted to a mode. A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775.

    Returns:
    Umask converted to a mode, as an integer.
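The mask-to-mode inversion (mask 002 yields mode 775) can be sketched like this. Note that os.umask both sets and returns the mask, so the sketch reads it by setting a throwaway value and immediately restoring the original:

```python
import os

def get_mask_as_mode():
    # Read the current umask without permanently changing it, then invert
    # it into a mode: mask 0o002 -> mode 0o775.
    umask = os.umask(0o777)
    os.umask(umask)  # restore the user's umask
    return 0o777 & ~umask
```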

    getLogin()

    source code 

    Returns the name of the currently-logged in user. This might fail under some circumstances - but if it does, our tests would fail anyway.

    failUnlessAssignRaises(testCase, exception, obj, prop, value)

    source code 

    Equivalent of failUnlessRaises, but used for property assignments instead.

It's nice to be able to use failUnlessRaises to check that a method call raises the exception that you expect. Unfortunately, this method can't be used to check Python property assignments, even though these property assignments are actually implemented underneath as methods.

    This function (which can be easily called by unit test classes) provides an easy way to wrap the assignment checks. It's not pretty, or as intuitive as the original check it's modeled on, but it does work.

    Let's assume you make this method call:

      testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath)
    

    If you do this, a test case failure will be raised unless the assignment:

      collectDir.absolutePath = absolutePath
    

    fails with a ValueError exception. The failure message differentiates between the case where no exception was raised and the case where the wrong exception was raised.

    Parameters:
    • testCase - PyUnit test case object (i.e. self).
    • exception - Exception that is expected to be raised.
    • obj - Object whose property is to be assigned to.
    • prop - Name of the property, as a string.
    • value - Value that is to be assigned to the property.

    Note: Internally, the missed and instead variables are used rather than directly calling testCase.fail upon noticing a problem because the act of "failure" itself generates an exception that would be caught by the general except clause.

    See Also: unittest.TestCase.failUnlessRaises

    runningAsRoot()

    source code 

    Returns boolean indicating whether the effective user id is root. This is always true on platforms that have no concept of root, like Windows.

    platformHasEcho()

    source code 

    Returns boolean indicating whether the platform has a sensible echo command. On some platforms, like Windows, echo doesn't really work for tests.

    platformSupportsLinks()

    source code 

    Returns boolean indicating whether the platform supports soft-links. Some platforms, like Windows, do not support links, and tests need to take this into account.

    platformSupportsPermissions()

    source code 

    Returns boolean indicating whether the platform supports UNIX-style file permissions. Some platforms, like Windows, do not support permissions, and tests need to take this into account.

    platformRequiresBinaryRead()

    source code 

    Returns boolean indicating whether the platform requires binary reads. Some platforms, like Windows, require a special flag to read binary data from files.

    setupDebugLogger()

    source code 

    Sets up a screen logger for debugging purposes.

    Normally, the CLI functionality configures the logger so that things get written to the right place. However, for debugging it's sometimes nice to just get everything -- debug information and output -- dumped to the screen. This function takes care of that.

    setupOverrides()

    source code 

    Set up any platform-specific overrides that might be required.

    When packages are built, this is done manually (hardcoded) in customize.py and the overrides are set up in cli.cli(). This way, no runtime checks need to be done. This is safe, because the package maintainer knows exactly which platform (Debian or not) the package is being built for.

    Unit tests are different, because they might be run anywhere. So, we attempt to make a guess about the platform using platformDebian(), and use that to set up the custom overrides so that platform-specific unit tests continue to work.
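    A minimal sketch of how such a platform guess might work. The /etc/debian_version check is an assumption for illustration; the real platformDebian() implementation may differ:

    ```python
    import os

    def platformDebian():
        # Assumption: a Debian(-derived) system is detected via this marker file
        return os.path.exists("/etc/debian_version")

    def setupOverrides():
        # Sketch: apply the Debian command overrides only when the guess says so,
        # e.g. mapping "cdrecord" -> "wodim" and "mkisofs" -> "genisoimage"
        if platformDebian():
            pass  # install overrides here
    ```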

    randomFilename(length, prefix=None, suffix=None)

    source code 

    Generates a random filename with the given length.

    Parameters:
    • length - Length of filename.
    Returns:
    Random filename.
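    A sketch of what this helper might look like, assuming that length counts only the random portion and that the optional prefix and suffix are appended around it (the real implementation may treat length differently):

    ```python
    import random
    import string

    def randomFilename(length, prefix=None, suffix=None):
        """Sketch: build a random lowercase filename of the given length."""
        characters = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
        return "%s%s%s" % (prefix or "", characters, suffix or "")
    ```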

    captureOutput(c)

    source code 

    Captures the output (stdout, stderr) of a function or a method.

    Some of our functions don't do anything other than just print output. We need a way to test these functions (at least nominally) but we don't want any of the output spoiling the test suite output.

    This function just creates a dummy file descriptor that can be used as a target by the callable function, rather than stdout or stderr.

    Parameters:
    • c - Callable function or method.
    Returns:
    Output of function, as one big string.

    Note: This method assumes that callable doesn't take any arguments besides keyword argument fd to specify the file descriptor.
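    The idea can be sketched like this. Note this is a simplified variant: the real helper passes a file descriptor via the fd keyword argument, while this sketch just redirects sys.stdout and sys.stderr around the call:

    ```python
    import sys
    from io import StringIO

    def captureOutput(c):
        """Sketch: call c() with stdout/stderr redirected, return captured text."""
        buffer = StringIO()
        saved = (sys.stdout, sys.stderr)
        sys.stdout = sys.stderr = buffer
        try:
            c()
        finally:
            sys.stdout, sys.stderr = saved  # always restore the real streams
        return buffer.getvalue()
    ```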

    _isPlatform(name)

    source code 

    Returns boolean indicating whether we're running on the indicated platform.

    Parameters:
    • name - Platform name to check, currently one of "windows" or "macosx".

    availableLocales()

    source code 

    Returns a list of available locales on the system.

    Returns:
    List of string locale names

    hexFloatLiteralAllowed()

    source code 

    Indicates whether hex float literals are allowed by the interpreter.

    As far back as 2004, some Python documentation indicated that octal and hex notation applied only to integer literals. However, prior to Python 2.5, it was legal to construct a float with an argument like 0xAC on some platforms. This check provides an indication of whether the current interpreter supports that behavior.

    This check exists so that unit tests can continue to test the same thing as always for pre-2.5 interpreters (i.e. making sure backwards compatibility doesn't break) while still continuing to work for later interpreters.

    The returned value is True if hex float literals are allowed, False otherwise.
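    One plausible way to implement such a check (an assumption for illustration; the real implementation isn't shown here) is simply to attempt the conversion and see whether it raises:

    ```python
    def hexFloatLiteralAllowed():
        """Sketch: True if float() accepts a hex argument like "0xAC"."""
        try:
            float("0xAC")
            return True
        except ValueError:
            # modern interpreters reject hex strings for float()
            return False
    ```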


    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.customize-module.html

    Module customize


    Functions

    customizeOverrides

    Variables

    DEBIAN_CDRECORD
    DEBIAN_MKISOFS
    PLATFORM
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.ExtensionsConfig-class.html
    Package CedarBackup2 :: Module config :: Class ExtensionsConfig

    Class ExtensionsConfig

    source code

    object --+
             |
            ExtensionsConfig
    

    Class representing Cedar Backup extensions configuration.

    Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. For instance, a hypothetical third party might write extension code to collect database repository data. If they write a properly-formatted extension function, they can use the extension configuration to map a command-line Cedar Backup action (i.e. "database") to their function.

    The following restrictions exist on data in this class:

    • If set, the order mode must be one of the values in VALID_ORDER_MODES
    • The actions list must be a list of ExtendedAction objects.
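    The restrictions can be illustrated with a minimal validation sketch. This is not the real class, and the order-mode values "index" and "dependency" are assumptions for illustration:

    ```python
    VALID_ORDER_MODES = ["index", "dependency"]  # assumed values

    class ExtendedAction(object):
        """Stand-in for the real ExtendedAction class, for illustration only."""
        def __init__(self, name=None, module=None, function=None):
            self.name, self.module, self.function = name, module, function

    def validateExtensionsConfig(actions, orderMode):
        """Sketch: enforce the two restrictions listed above."""
        if orderMode is not None and orderMode not in VALID_ORDER_MODES:
            raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES)
        for action in actions or []:
            if not isinstance(action, ExtendedAction):
                raise ValueError("Actions list must contain ExtendedAction objects.")
    ```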
    Instance Methods
     
    __init__(self, actions=None, orderMode=None)
    Constructor for the ExtensionsConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setOrderMode(self, value)
    Property target used to set the order mode.
    source code
     
    _getOrderMode(self)
    Property target used to get the order mode.
    source code
     
    _setActions(self, value)
    Property target used to set the actions list.
    source code
     
    _getActions(self)
    Property target used to get the actions list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      orderMode
    Order mode for extensions, to control execution ordering.
      actions
    List of extended actions.

    Inherited from object: __class__

    Method Details

    __init__(self, actions=None, orderMode=None)
    (Constructor)

    source code 

    Constructor for the ExtensionsConfig class.

    Parameters:
    • actions - List of extended actions
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setOrderMode(self, value)

    source code 

    Property target used to set the order mode. The value must be one of VALID_ORDER_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setActions(self, value)

    source code 

    Property target used to set the actions list. Either the value must be None or each element must be an ExtendedAction.

    Raises:
    • ValueError - If the value is not an ExtendedAction

    Property Details [hide private]

    orderMode

    Order mode for extensions, to control execution ordering.

    Get Method:
    _getOrderMode(self) - Property target used to get the order mode.
    Set Method:
    _setOrderMode(self, value) - Property target used to set the order mode.

    actions

    List of extended actions.

    Get Method:
    _getActions(self) - Property target used to get the actions list.
    Set Method:
    _setActions(self, value) - Property target used to set the actions list.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.cdwriter.CdWriter-class.html
    Package CedarBackup2 :: Package writers :: Module cdwriter :: Class CdWriter

    Class CdWriter

    source code

    object --+
             |
            CdWriter
    

    Class representing a device that knows how to write CD media.

    Summary

    This is a class representing a device that knows how to write CD media. It provides common operations for the device, such as ejecting the media, writing an ISO image to the media, or checking for the current media capacity. It also provides a place to store device attributes, such as whether the device supports writing multisession discs, etc.

    This class is implemented in terms of the eject and cdrecord programs, both of which should be available on most UN*X platforms.

    Image Writer Interface

    The following methods make up the "image writer" interface shared with other kinds of writers (such as DVD writers):

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()
    

    Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer.

    The media attribute is also assumed to be available.

    Media Types

    This class knows how to write to two different kinds of media, represented by the following constants:

    • MEDIA_CDR_74: 74-minute CD-R media (650 MB capacity)
    • MEDIA_CDRW_74: 74-minute CD-RW media (650 MB capacity)
    • MEDIA_CDR_80: 80-minute CD-R media (700 MB capacity)
    • MEDIA_CDRW_80: 80-minute CD-RW media (700 MB capacity)

    Most hardware can read and write both 74-minute and 80-minute CD-R and CD-RW media. Some older drives may only be able to write CD-R media. The difference between the two is that CD-RW media can be rewritten (erased), while CD-R media cannot be.

    I do not support any other configurations for a couple of reasons. The first is that I've never tested any other kind of media. The second is that anything other than 74- or 80-minute media is apparently non-standard.
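    For illustration, the constants and the rewritable distinction might look like this. The numeric values are assumptions, not taken from the source:

    ```python
    # Assumed values for the media-type constants described above
    MEDIA_CDR_74 = 1   # 74-minute CD-R (650 MB)
    MEDIA_CDRW_74 = 2  # 74-minute CD-RW (650 MB)
    MEDIA_CDR_80 = 3   # 80-minute CD-R (700 MB)
    MEDIA_CDRW_80 = 4  # 80-minute CD-RW (700 MB)

    def isRewritable(mediaType):
        # Only CD-RW media can be blanked (erased) and rewritten
        return mediaType in (MEDIA_CDRW_74, MEDIA_CDRW_80)
    ```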

    Device Attributes vs. Media Attributes

    A given writer instance has two different kinds of attributes associated with it, which I call device attributes and media attributes. Device attributes are things which can be determined without looking at the media, such as whether the drive supports writing multisession disks or has a tray. Media attributes are attributes which vary depending on the state of the media, such as the remaining capacity on a disc. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls.

    Talking to Hardware

    This class needs to talk to CD writer hardware in two different ways: through cdrecord to actually write to the media, and through the filesystem to do things like open and close the tray.

    Historically, CdWriter has interacted with cdrecord using the scsiId attribute, and with most other utilities using the device attribute. This changed somewhat in Cedar Backup 2.9.0.

    When Cedar Backup was first written, the only way to interact with cdrecord was by using a SCSI device id. IDE devices were mapped to pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" arrived, and it became common to see ATA:1,0,0 or ATAPI:0,0,0 as a way to address IDE hardware. By late 2006, ATA and ATAPI had apparently been deprecated in favor of just addressing the IDE device directly by name, i.e. /dev/cdrw.

    Because of this latest development, it no longer makes sense to require a CdWriter to be created with a SCSI id -- there might not be one. So, the passed-in SCSI id is now optional. Also, there is now a hardwareId attribute. This attribute is filled in with either the SCSI id (if provided) or the device (otherwise). The hardware id is the value that will be passed to cdrecord in the dev= argument.
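    The hardware-id resolution described above amounts to a simple fallback, sketched here:

    ```python
    def resolveHardwareId(device, scsiId=None):
        """Value passed to cdrecord's dev= argument: SCSI id if given, else device path."""
        return scsiId if scsiId is not None else device
    ```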

    Testing

    It's rather difficult to test this code in an automated fashion, even if you have access to a physical CD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to.

    Because of this, much of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all.

    Instance Methods
     
    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=1, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    Initializes a CD writer object.
    source code
     
    isRewritable(self)
    Indicates whether the media is rewritable per configuration.
    source code
     
    _retrieveProperties(self)
    Retrieves properties for a device from cdrecord.
    source code
     
    retrieveCapacity(self, entireDisc=False, useMulti=True)
    Retrieves capacity for the current media in terms of a MediaCapacity object.
    source code
     
    _getBoundaries(self, entireDisc=False, useMulti=True)
    Gets the ISO boundaries for the media.
    source code
     
    openTray(self)
    Opens the device's tray and leaves it open.
    source code
     
    closeTray(self)
    Closes the device's tray.
    source code
     
    refreshMedia(self)
    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.
    source code
     
    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)
    Writes an ISO image to the media in the device.
    source code
     
    _blankMedia(self)
    Blanks the media in the device, if the media is rewritable.
    source code
     
    initializeImage(self, newDisc, tmpdir, mediaLabel=None)
    Initializes the writer's associated ISO image.
    source code
     
    addImageEntry(self, path, graftPoint)
    Adds a filepath entry to the writer's associated ISO image.
    source code
     
    setImageNewDisc(self, newDisc)
    Resets (overrides) the newDisc flag on the internal image.
    source code
     
    getEstimatedImageSize(self)
    Gets the estimated size of the image associated with the writer.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _getScsiId(self)
    Property target used to get the SCSI id value.
    source code
     
    _getHardwareId(self)
    Property target used to get the hardware id value.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _getMedia(self)
    Property target used to get the media description.
    source code
     
    _getDeviceType(self)
    Property target used to get the device type.
    source code
     
    _getDeviceVendor(self)
    Property target used to get the device vendor.
    source code
     
    _getDeviceId(self)
    Property target used to get the device id.
    source code
     
    _getDeviceBufferSize(self)
    Property target used to get the device buffer size.
    source code
     
    _getDeviceSupportsMulti(self)
    Property target used to get the device-support-multi flag.
    source code
     
    _getDeviceHasTray(self)
    Property target used to get the device-has-tray flag.
    source code
     
    _getDeviceCanEject(self)
    Property target used to get the device-can-eject flag.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the configured refresh media delay, in seconds.
    source code
     
    _getEjectDelay(self)
    Property target used to get the configured eject delay, in seconds.
    source code
     
    unlockTray(self)
    Unlocks the device's tray.
    source code
     
    _createImage(self)
    Creates an ISO image based on configuration in self._image.
    source code
     
    _writeImage(self, imagePath, writeMulti, newDisc)
    Write an ISO image to disc using cdrecord.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _calculateCapacity(media, boundaries)
    Calculates capacity for the media in terms of boundaries.
    source code
     
    _parsePropertiesOutput(output)
    Parses the output from a cdrecord properties command.
    source code
     
    _parseBoundariesOutput(output)
    Parses the output from a cdrecord capacity command.
    source code
     
    _buildOpenTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
     
    _buildCloseTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
     
    _buildPropertiesArgs(hardwareId)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildBoundariesArgs(hardwareId)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildBlankArgs(hardwareId, driveSpeed=None)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildUnlockTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
    Properties
      device
    Filesystem device name for this writer.
      scsiId
    SCSI id for the device, in the form [<method>:]scsibus,target,lun.
      hardwareId
    Hardware id for this writer, either SCSI id or device path.
      driveSpeed
    Speed at which the drive writes.
      media
    Definition of media that is expected to be in the device.
      deviceType
    Type of the device, as returned from cdrecord -prcap.
      deviceVendor
    Vendor of the device, as returned from cdrecord -prcap.
      deviceId
    Device identification, as returned from cdrecord -prcap.
      deviceBufferSize
    Size of the device's write buffer, in bytes.
      deviceSupportsMulti
    Indicates whether device supports multisession discs.
      deviceHasTray
    Indicates whether the device has a media tray.
      deviceCanEject
    Indicates whether the device supports ejecting its media.
      refreshMediaDelay
    Refresh media delay, in seconds.
      ejectDelay
    Eject delay, in seconds.

    Inherited from object: __class__

    Method Details

    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=1, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    (Constructor)

    source code 

    Initializes a CD writer object.

    The current user must have write access to the device at the time the object is instantiated, or an exception will be thrown. However, no media-related validation is done, and in fact there is no need for any media to be in the drive until one of the other media attribute-related methods is called.

    The various instance variables such as deviceType, deviceVendor, etc. might be None, if we're unable to parse this specific information from the cdrecord output. This information is just for reference.

    The SCSI id is optional, but the device path is required. If the SCSI id is passed in, then the hardware id attribute will be taken from the SCSI id. Otherwise, the hardware id will be taken from the device.

    If cdrecord improperly detects whether your writer device has a tray and can be safely opened and closed, then pass in noEject=True. This will override the detected properties, and the device will never be ejected.

    Parameters:
    • device (Absolute path to a filesystem device, i.e. /dev/cdrw) - Filesystem device associated with this writer.
    • scsiId (If provided, SCSI id in the form [<method>:]scsibus,target,lun) - SCSI id for the device (optional).
    • driveSpeed (Use 2 for 2x device, etc. or None to use device default.) - Speed at which the drive writes.
    • mediaType (One of the valid media type as discussed above.) - Type of the media that is assumed to be in the drive.
    • noEject (Boolean true/false) - Overrides properties to indicate that the device does not support eject.
    • refreshMediaDelay (Number of seconds, an integer >= 0) - Refresh media delay to use, if any
    • ejectDelay (Number of seconds, an integer >= 0) - Eject delay to use, if any
    • unittest (Boolean true/false) - Turns off certain validations, for use in unit testing.
    Raises:
    • ValueError - If the device is not valid for some reason.
    • ValueError - If the SCSI id is not in a valid form.
    • ValueError - If the drive speed is not an integer >= 1.
    • IOError - If device properties could not be read for some reason.
    Overrides: object.__init__

    Note: The unittest parameter should never be set to True outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose.

    _retrieveProperties(self)

    source code 

    Retrieves properties for a device from cdrecord.

    The results are returned as a tuple of the object device attributes as returned from _parsePropertiesOutput: (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject).

    Returns:
    Results tuple as described above.
    Raises:
    • IOError - If there is a problem talking to the device.

    retrieveCapacity(self, entireDisc=False, useMulti=True)

    source code 

    Retrieves capacity for the current media in terms of a MediaCapacity object.

    If entireDisc is passed in as True the capacity will be for the entire disc, as if it were to be rewritten from scratch. If the drive does not support writing multisession discs or if useMulti is passed in as False, the capacity will also be as if the disc were to be rewritten from scratch, but the indicated boundaries value will be None. The same will happen if the disc cannot be read for some reason. Otherwise, the capacity (including the boundaries) will represent whatever space remains on the disc to be filled by future sessions.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    • useMulti (Boolean true/false) - Indicates whether a multisession disc should be assumed, if possible.
    Returns:
    MediaCapacity object describing the capacity of the media.
    Raises:
    • IOError - If the media could not be read for some reason.

    _getBoundaries(self, entireDisc=False, useMulti=True)

    source code 

    Gets the ISO boundaries for the media.

    If entireDisc is passed in as True the boundaries will be None, as if the disc were to be rewritten from scratch. If the drive does not support writing multisession discs, the returned value will be None. The same will happen if the disc can't be read for some reason. Otherwise, the returned value will represent the boundaries of the disc's current contents.

    The results are returned as a tuple of (lower, upper) as needed by the IsoImage class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    • useMulti (Boolean true/false) - Indicates whether a multisession disc should be assumed, if possible.
    Returns:
    Boundaries tuple or None, as described above.
    Raises:
    • IOError - If the media could not be read for some reason.

    _calculateCapacity(media, boundaries)
    Static Method

    source code 

    Calculates capacity for the media in terms of boundaries.

    If boundaries is None or the lower bound is 0 (zero), then the capacity will be for the entire disc minus the initial lead in. Otherwise, capacity will be as if the caller wanted to add an additional session to the end of the existing data on the disc.

    Parameters:
    • media - MediaDescription object describing the media capacity.
    • boundaries - Session boundaries as returned from _getBoundaries.
    Returns:
    MediaCapacity object describing the capacity of the media.
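    The rule can be sketched numerically as follows. The helper name and the lead-in handling are hypothetical; boundaries are in ISO sectors of 2048 bytes, as described for _getBoundaries:

    ```python
    ISO_SECTOR_SIZE = 2048  # bytes per ISO-9660 sector

    def remainingBytes(totalBytes, leadInBytes, boundaries):
        """Sketch of the capacity rule above."""
        if boundaries is None or boundaries[0] == 0:
            # entire disc, minus the initial lead-in
            return totalBytes - leadInBytes
        # appending a session after existing data: subtract what's already used
        return totalBytes - boundaries[1] * ISO_SECTOR_SIZE
    ```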

    openTray(self)

    source code 

    Opens the device's tray and leaves it open.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    If the writer was constructed with noEject=True, then this is a no-op.

    Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag.

    Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy.

    Raises:
    • IOError - If there is an error talking to the device.
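    Based on the workaround described above, the unlock arguments would be built along these lines (a sketch consistent with running 'eject -i off' against the device; the real _buildUnlockTrayArgs may differ):

    ```python
    def buildUnlockTrayArgs(device):
        # 'eject -i off <device>' disables the tray lock, working around
        # "unable to eject ... Inappropriate ioctl for device" errors
        return ["-i", "off", device]
    ```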

    closeTray(self)

    source code 

    Closes the device's tray.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    If the writer was constructed with noEject=True, then this is a no-op.

    Raises:
    • IOError - If there is an error talking to the device.

    refreshMedia(self)

    source code 

    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.

    Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.)

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though.

    Raises:
    • IOError - If there is an error talking to the device.

    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)

    source code 

    Writes an ISO image to the media in the device.

    If newDisc is passed in as True, we assume that the entire disc will be overwritten, and the media will be blanked before writing it if possible (i.e. if the media is rewritable).

    If writeMulti is passed in as True, then a multisession disc will be written if possible (i.e. if the drive supports writing multisession discs).

    if imagePath is passed in as None, then the existing image configured with initializeImage will be used. Under these circumstances, the passed-in newDisc flag will be ignored.

    By default, we assume that the disc can be written multisession and that we should append to the current contents of the disc. In any case, the ISO image must be generated appropriately (i.e. must take into account any existing session boundaries, etc.)

    Parameters:
    • imagePath (String representing a path on disk) - Path to an ISO image on disk, or None to use writer's image
    • newDisc (Boolean true/false.) - Indicates whether the entire disc will be overwritten.
    • writeMulti (Boolean true/false) - Indicates whether a multisession disc should be written, if possible.
    Raises:
    • ValueError - If the image path is not absolute.
    • ValueError - If some path cannot be encoded properly.
    • IOError - If the media could not be written to for some reason.
    • ValueError - If no image is passed in and initializeImage() was not previously called

    _blankMedia(self)

    source code 

    Blanks the media in the device, if the media is rewritable.

    Raises:
    • IOError - If the media could not be written to for some reason.

    _parsePropertiesOutput(output)
    Static Method

    source code 

    Parses the output from a cdrecord properties command.

    The output parameter should be a list of strings as returned from executeCommand for a cdrecord command with arguments as from _buildPropertiesArgs. The list of strings will be parsed to yield information about the properties of the device.

    The output is expected to be a long list of strings. Unfortunately, the strings aren't in a completely regular format. However, the format of individual lines seems to be regular enough that we can look for specific values. Two kinds of parsing take place: one kind of parsing picks out specific values like the device id, device vendor, etc. The other kind of parsing just sets a boolean flag True if a matching line is found. All of the parsing is done with regular expressions.

    Right now, pretty much nothing in the output is required and we should parse an empty document successfully (albeit resulting in a device that can't eject, doesn't have a tray and doesn't support multisession discs). I had briefly considered erroring out if certain lines weren't found or couldn't be parsed, but that seems like a bad idea given that most of the information is just for reference.

    The results are returned as a tuple of the object device attributes: (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject).

    Parameters:
    • output - Output from a cdrecord -prcap command.
    Returns:
    Results tuple as described above.
    Raises:
    • IOError - If there is a problem parsing the output.
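    The flag-style parsing can be sketched like this. The line text matched is hypothetical, not the exact cdrecord -prcap wording:

    ```python
    import re

    # Hypothetical pattern; the real cdrecord -prcap line text may differ
    _MULTI_PATTERN = re.compile(r"read\s+multi-session", re.IGNORECASE)

    def parseSupportsMulti(output):
        """Sketch: boolean flag set True when any output line matches."""
        return any(_MULTI_PATTERN.search(line) for line in output)
    ```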

    _parseBoundariesOutput(output)
    Static Method

    source code 

    Parses the output from a cdrecord capacity command.

    The output parameter should be a list of strings as returned from executeCommand for a cdrecord command with arguments as from _buildBoundaryArgs. The list of strings will be parsed to yield information about the capacity of the media in the device.

    Basically, we expect the list of strings to include just one line, a pair of values. There isn't supposed to be whitespace, but we allow it anyway in the regular expression. Any lines below the one line we parse are completely ignored. It would be a good idea to ignore stderr when executing the cdrecord command that generates output for this method, because sometimes cdrecord spits out kernel warnings about the actual output.

    The results are returned as a tuple of (lower, upper) as needed by the IsoImage class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however.

    Parameters:
    • output - Output from a cdrecord -msinfo command.
    Returns:
    Boundaries tuple as described above.
    Raises:
    • IOError - If there is a problem parsing the output.

    Note: If the boundaries output can't be parsed, we return None.
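    A sketch of the parsing described above: whitespace-tolerant, first line only, with None returned on failure (the helper name is illustrative, not the real static method):

    ```python
    import re

    # Tolerate surrounding whitespace, as the text above allows for
    _BOUNDARIES_PATTERN = re.compile(r"^\s*(\d+)\s*,\s*(\d+)\s*$")

    def parseBoundariesOutput(output):
        """Sketch: parse 'lower,upper' (ISO sectors) from the first output line."""
        if not output:
            return None
        match = _BOUNDARIES_PATTERN.match(output[0])
        if match is None:
            return None  # unparseable boundaries yield None, per the note above
        return (int(match.group(1)), int(match.group(2)))
    ```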

    _buildOpenTrayArgs(device)
    Static Method

    source code 

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to open the tray and eject the media. No validation is done by this method as to whether this action actually makes sense.

    Parameters:
    • device - Filesystem device name for this writer, i.e. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildCloseTrayArgs(device)
    Static Method

    source code 

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to close the tray and reload the media. No validation is done by this method as to whether this action actually makes sense.

    Parameters:
    • device - Filesystem device name for this writer, i.e. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.
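    Taken together, the open- and close-tray helpers amount to choosing flags for the eject command. A minimal sketch (the function names and exact argument lists are assumptions; `eject <device>` opens the tray and `eject -t <device>` closes it on typical Linux systems):

    ```python
    def build_open_tray_args(device):
        # "eject /dev/cdrw" opens the tray and ejects the media
        return [device]

    def build_close_tray_args(device):
        # "eject -t /dev/cdrw" closes the tray and reloads the media
        return ["-t", device]
    ```

    Note that the returned list contains only the arguments, since util.executeCommand supplies the command itself.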

    _buildPropertiesArgs(hardwareId)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to ask the device for a list of its capabilities via the -prcap switch.

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildBoundariesArgs(hardwareId)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to ask the device for the current multisession boundaries of the media using the -msinfo switch.

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildBlankArgs(hardwareId, driveSpeed=None)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to blank the media in the device identified by hardwareId. No validation is done by this method as to whether the action makes sense (i.e. to whether the media even can be blanked).

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    • driveSpeed - Speed at which the drive writes.
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to write the indicated ISO image (imagePath) to the media in the device identified by hardwareId. The writeMulti argument controls whether to write a multisession disc. No validation is done by this method as to whether the action makes sense (i.e. to whether the device even can write multisession discs, for instance).

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    • imagePath - Path to an ISO image on disk.
    • driveSpeed - Speed at which the drive writes.
    • writeMulti - Indicates whether to write a multisession disc.
    Returns:
    List suitable for passing to util.executeCommand as args.
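    As a hedged sketch of what such an argument list might look like (this is illustrative, not necessarily the exact flags or ordering the library builds; `dev=`, `speed=`, `-multi`, and `-data` are standard cdrecord options):

    ```python
    def build_write_args(hardware_id, image_path, drive_speed=None, write_multi=True):
        """Build cdrecord arguments to write an ISO image (illustrative only)."""
        args = ["-v", "dev=%s" % hardware_id]
        if drive_speed is not None:
            args.append("speed=%s" % drive_speed)
        if write_multi:
            args.append("-multi")  # leave the disc open for another session
        args.append("-data")
        args.append(image_path)
        return args
    ```

    For instance, `build_write_args("0,0,0", "/tmp/backup.iso", drive_speed=4)` would resemble the command line `cdrecord -v dev=0,0,0 speed=4 -multi -data /tmp/backup.iso`.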

    initializeImage(self, newDisc, tmpdir, mediaLabel=None)

    source code 

    Initializes the writer's associated ISO image.

    This method initializes the image instance variable so that the caller can use the addImageEntry method. Once entries have been added, the writeImage method can be called with no arguments.

    Parameters:
    • newDisc (Boolean true/false.) - Indicates whether the disc should be re-initialized
    • tmpdir (String representing a directory path on disk) - Temporary directory to use if needed
    • mediaLabel (String, no more than 25 characters long) - Media label to be applied to the image, if any

    addImageEntry(self, path, graftPoint)

    source code 

    Adds a filepath entry to the writer's associated ISO image.

    The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass None.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    Raises:
    • ValueError - If initializeImage() was not previously called

    Note: Before calling this method, you must call initializeImage.
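    The typical call sequence, as described above, can be sketched like this (the function name, directory paths, and media label are hypothetical; `writer` is an already-constructed writer instance):

    ```python
    def burn_backup(writer, staging_dir):
        """Illustrative CdWriter call flow: initialize, add entries, write."""
        # 1. Initialize the internal ISO image; newDisc=True rewrites the disc
        writer.initializeImage(newDisc=True, tmpdir="/var/tmp", mediaLabel="BACKUP")
        # 2. Add entries; the contents of staging_dir land under /backup
        writer.addImageEntry(staging_dir, graftPoint="backup")
        # 3. Build the image and write it to the media (no arguments needed)
        return writer.writeImage()
    ```

    Calling addImageEntry or writeImage without first calling initializeImage raises ValueError.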

    setImageNewDisc(self, newDisc)

    source code 

    Resets (overrides) the newDisc flag on the internal image.

    Parameters:
    • newDisc - New disc flag to set
    Raises:
    • ValueError - If initializeImage() was not previously called

    getEstimatedImageSize(self)

    source code 

    Gets the estimated size of the image associated with the writer.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If initializeImage() was not previously called

    unlockTray(self)

    source code 

    Unlocks the device's tray.

    Raises:
    • IOError - If there is an error talking to the device.

    _createImage(self)

    source code 

    Creates an ISO image based on configuration in self._image.

    Returns:
    Path to the newly-created ISO image on disk.
    Raises:
    • IOError - If there is an error writing the image to disk.
    • ValueError - If there are no filesystem entries in the image
    • ValueError - If a path cannot be encoded properly.

    _writeImage(self, imagePath, writeMulti, newDisc)

    source code 

    Writes an ISO image to disc using cdrecord. The disc is blanked first if newDisc is True.

    Parameters:
    • imagePath - Path to an ISO image on disk
    • writeMulti - Indicates whether a multisession disc should be written, if possible.
    • newDisc - Indicates whether the entire disc will be overwritten.

    _buildUnlockTrayArgs(device)
    Static Method

    source code 

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to unlock the tray.

    Parameters:
    • device - Filesystem device name for this writer, i.e. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.
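    Per the project changelog, the tray is unlocked with `eject -i off`, so the argument list can be sketched as follows (the function name is an assumption):

    ```python
    def build_unlock_tray_args(device):
        # "eject -i off /dev/cdrw" disables the device's door lock,
        # working around cases where the tray refuses to open
        return ["-i", "off", device]
    ```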

    Property Details [hide private]

    device

    Filesystem device name for this writer.

    Get Method:
    _getDevice(self) - Property target used to get the device value.

    scsiId

    SCSI id for the device, in the form [<method>:]scsibus,target,lun.

    Get Method:
    _getScsiId(self) - Property target used to get the SCSI id value.

    hardwareId

    Hardware id for this writer, either SCSI id or device path.

    Get Method:
    _getHardwareId(self) - Property target used to get the hardware id value.

    driveSpeed

    Speed at which the drive writes.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.

    media

    Definition of media that is expected to be in the device.

    Get Method:
    _getMedia(self) - Property target used to get the media description.

    deviceType

    Type of the device, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceType(self) - Property target used to get the device type.

    deviceVendor

    Vendor of the device, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceVendor(self) - Property target used to get the device vendor.

    deviceId

    Device identification, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceId(self) - Property target used to get the device id.

    deviceBufferSize

    Size of the device's write buffer, in bytes.

    Get Method:
    _getDeviceBufferSize(self) - Property target used to get the device buffer size.

    deviceSupportsMulti

    Indicates whether device supports multisession discs.

    Get Method:
    _getDeviceSupportsMulti(self) - Property target used to get the device-support-multi flag.

    deviceHasTray

    Indicates whether the device has a media tray.

    Get Method:
    _getDeviceHasTray(self) - Property target used to get the device-has-tray flag.

    deviceCanEject

    Indicates whether the device supports ejecting its media.

    Get Method:
    _getDeviceCanEject(self) - Property target used to get the device-can-eject flag.

    refreshMediaDelay

    Refresh media delay, in seconds.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the configured refresh media delay, in seconds.

    ejectDelay

    Eject delay, in seconds.

    Get Method:
    _getEjectDelay(self) - Property target used to get the configured eject delay, in seconds.

    CedarBackup2.config.ActionDependencies
    Package CedarBackup2 :: Module config :: Class ActionDependencies

    Class ActionDependencies

    source code

    object --+
             |
            ActionDependencies
    

    Class representing dependencies associated with an extended action.

    Execution ordering for extended actions is done in one of two ways: either by using index values (lower index gets run first) or by having the extended action specify dependencies in terms of other named actions. This class encapsulates the dependency information for an extended action.

    The following restrictions exist on data in this class:

    • Any action name must be a non-empty string matching ACTION_NAME_REGEX
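    To illustrate how before/after lists like these can be turned into an execution order, here is a simple topological sort. This is a hypothetical sketch, not Cedar Backup's actual resolution code (which lives elsewhere); the function name and input shape are assumptions:

    ```python
    def resolve_order(actions):
        """Order actions so every 'after' dependency runs first.

        'actions' maps action name -> (beforeList, afterList), mirroring
        the ActionDependencies attributes.  Illustrative only.
        """
        # Normalize both list kinds into "X depends on Y" edges:
        # "run after Y" means this action depends on Y;
        # "run before Z" means Z depends on this action.
        deps = {name: set(after) for name, (before, after) in actions.items()}
        for name, (before, _after) in actions.items():
            for other in before:
                deps[other].add(name)
        ordered = []
        while deps:
            ready = [n for n, d in deps.items() if not d]
            if not ready:
                raise ValueError("circular action dependencies")
            for n in sorted(ready):  # sort for deterministic output
                ordered.append(n)
                del deps[n]
            for d in deps.values():
                d.difference_update(ready)
        return ordered
    ```

    For example, an extension that must run after "collect" and before "store" sorts between those two standard actions.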
    Instance Methods [hide private]
     
    __init__(self, beforeList=None, afterList=None)
    Constructor for the ActionDependencies class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setBeforeList(self, value)
    Property target used to set the "run before" list.
    source code
     
    _getBeforeList(self)
    Property target used to get the "run before" list.
    source code
     
    _setAfterList(self, value)
    Property target used to set the "run after" list.
    source code
     
    _getAfterList(self)
    Property target used to get the "run after" list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      beforeList
    List of named actions that this action must be run before.
      afterList
    List of named actions that this action must be run after.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, beforeList=None, afterList=None)
    (Constructor)

    source code 

    Constructor for the ActionDependencies class.

    Parameters:
    • beforeList - List of named actions that this action must be run before
    • afterList - List of named actions that this action must be run after
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setBeforeList(self, value)

    source code 

    Property target used to set the "run before" list. Either the value must be None or each element must be a string matching ACTION_NAME_REGEX.

    Raises:
    • ValueError - If the value does not match the regular expression.

    _setAfterList(self, value)

    source code 

    Property target used to set the "run after" list. Either the value must be None or each element must be a string matching ACTION_NAME_REGEX.

    Raises:
    • ValueError - If the value does not match the regular expression.

    Property Details [hide private]

    beforeList

    List of named actions that this action must be run before.

    Get Method:
    _getBeforeList(self) - Property target used to get the "run before" list.
    Set Method:
    _setBeforeList(self, value) - Property target used to set the "run before" list.

    afterList

    List of named actions that this action must be run after.

    Get Method:
    _getAfterList(self) - Property target used to get the "run after" list.
    Set Method:
    _setAfterList(self, value) - Property target used to set the "run after" list.

    dvdwriter

    Module dvdwriter


    Classes

    DvdWriter
    MediaCapacity
    MediaDefinition

    Variables

    EJECT_COMMAND
    GROWISOFS_COMMAND
    MEDIA_DVDPLUSR
    MEDIA_DVDPLUSRW
    __package__
    logger

    CedarBackup2.extend.postgresql.PostgresqlConfig
    Package CedarBackup2 :: Package extend :: Module postgresql :: Class PostgresqlConfig

    Class PostgresqlConfig

    source code

    object --+
             |
            PostgresqlConfig
    

    Class representing PostgreSQL configuration.

    The PostgreSQL configuration information is used for backing up PostgreSQL databases.

    The following restrictions exist on data in this class:

    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The 'all' flag must be 'Y' if no databases are defined.
    • The 'all' flag must be 'N' if any databases are defined.
    • Any values in the databases list must be strings.
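    The consistency rules above can be expressed as a small validation function. This is an illustrative sketch, not the class's actual property-setter code, and the function and parameter names are assumptions:

    ```python
    def validate_postgresql_config(all_flag, databases):
        """Enforce the 'all' flag / databases restrictions described above."""
        if databases:
            if not all(isinstance(db, str) for db in databases):
                raise ValueError("Any values in the databases list must be strings.")
            if all_flag:
                raise ValueError("The 'all' flag must be 'N' if any databases are defined.")
        elif not all_flag:
            raise ValueError("The 'all' flag must be 'Y' if no databases are defined.")
    ```

    In other words, you either back up everything (all=Y, no databases listed) or only an explicit list of databases (all=N).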
    Instance Methods [hide private]
     
    __init__(self, user=None, compressMode=None, all=None, databases=None)
    Constructor for the PostgresqlConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setUser(self, value)
    Property target used to set the user value.
    source code
     
    _getUser(self)
    Property target used to get the user value.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setAll(self, value)
    Property target used to set the 'all' flag.
    source code
     
    _getAll(self)
    Property target used to get the 'all' flag.
    source code
     
    _setDatabases(self, value)
    Property target used to set the databases list.
    source code
     
    _getDatabases(self)
    Property target used to get the databases list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      user
    User to execute backup as.
      all
    Indicates whether to back up all databases.
      databases
    List of databases to back up.
      compressMode
    Compress mode to be used for backed-up files.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, user=None, compressMode=None, all=None, databases=None)
    (Constructor)

    source code 

    Constructor for the PostgresqlConfig class.

    Parameters:
    • user - User to execute backup as.
    • compressMode - Compress mode for backed-up files.
    • all - Indicates whether to back up all databases.
    • databases - List of databases to back up.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setAll(self, value)

    source code 

    Property target used to set the 'all' flag. No validations, but we normalize the value to True or False.

    _setDatabases(self, value)

    source code 

    Property target used to set the databases list. Either the value must be None or each element must be a string.

    Raises:
    • ValueError - If the value is not a string.

    Property Details [hide private]

    user

    User to execute backup as.

    Get Method:
    _getUser(self) - Property target used to get the user value.
    Set Method:
    _setUser(self, value) - Property target used to set the user value.

    all

    Indicates whether to back up all databases.

    Get Method:
    _getAll(self) - Property target used to get the 'all' flag.
    Set Method:
    _setAll(self, value) - Property target used to set the 'all' flag.

    databases

    List of databases to back up.

    Get Method:
    _getDatabases(self) - Property target used to get the databases list.
    Set Method:
    _setDatabases(self, value) - Property target used to set the databases list.

    compressMode

    Compress mode to be used for backed-up files.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    CedarBackup2.util.PathResolverSingleton
    Package CedarBackup2 :: Module util :: Class PathResolverSingleton

    Class PathResolverSingleton

    source code

    object --+
             |
            PathResolverSingleton
    

    Singleton used for resolving executable paths.

    Various functions throughout Cedar Backup (including extensions) need a way to resolve the path of executables that they use. For instance, the image functionality needs to find the mkisofs executable, and the Subversion extension needs to find the svnlook executable. Cedar Backup's original behavior was to assume that the simple name ("svnlook" or whatever) was available on the caller's $PATH, and to fail otherwise. However, this turns out to be less than ideal, since for instance the root user might not always have executables like svnlook in its path.

    One solution is to specify a path (either via an absolute path or some sort of path insertion or path appending mechanism) that would apply to the executeCommand() function. This is not difficult to implement, but it seems like kind of a "big hammer" solution. Besides that, it might also represent a security flaw (for instance, I prefer not to mess with root's $PATH at the application level if I don't have to).

    The alternative is to set up some sort of configuration for the path to certain executables, i.e. "find svnlook in /usr/local/bin/svnlook" or whatever. This PathResolverSingleton aims to provide a good solution to the mapping problem. Callers of all sorts (extensions or not) can get an instance of the singleton. Then, they call the lookup method to try and resolve the executable they are looking for. Through the lookup method, the caller can also specify a default to use if a mapping is not found. This way, with no real effort on the part of the caller, behavior can neatly degrade to something equivalent to the current behavior if there is no special mapping or if the singleton was never initialized in the first place.

    Even better, extensions automagically get access to the same resolver functionality, and they don't even need to understand how the mapping happens. All extension authors need to do is document what executables their code requires, and the standard resolver configuration section will meet their needs.

    The class should be initialized once through the constructor somewhere in the main routine. Then, the main routine should call the fill method to fill in the resolver's internal structures. Everyone else who needs to resolve a path will get an instance of the class using getInstance and will then just call the lookup method.
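    The getInstance/fill/lookup lifecycle described above can be sketched with a minimal singleton (an illustrative stand-in, not the library's actual implementation or its _Helper machinery):

    ```python
    class PathResolver(object):
        """Minimal sketch of the resolver's lookup-with-default behavior."""
        _instance = None

        @classmethod
        def getInstance(cls):
            # Everyone shares one instance, so one fill() call in the main
            # routine configures resolution for all callers and extensions.
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

        def __init__(self):
            self._mapping = {}

        def fill(self, mapping):
            """Fill in the internal mapping from resource name to path."""
            self._mapping = dict(mapping)

        def lookup(self, name, default=None):
            # Degrade gracefully: unmapped names fall back to the default,
            # which callers usually set to the bare executable name.
            return self._mapping.get(name, default)
    ```

    A caller such as the Subversion extension would then write `PathResolver.getInstance().lookup("svnlook", default="svnlook")`, getting the configured path if one exists and the bare name otherwise.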

    Nested Classes [hide private]
      _Helper
    Helper class to provide a singleton factory method.
    Instance Methods [hide private]
     
    __init__(self)
    Singleton constructor, which just creates the singleton instance.
    source code
     
    lookup(self, name, default=None)
    Looks up name and returns the resolved path associated with the name.
    source code
     
    fill(self, mapping)
    Fills in the singleton's internal mapping from name to resource.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables [hide private]
      _instance = None
    Holds a reference to the singleton
      getInstance = _Helper()
    Instance Variables [hide private]
      _mapping
    Internal mapping from resource name to path.
    Properties [hide private]

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self)
    (Constructor)

    source code 

    Singleton constructor, which just creates the singleton instance.

    Overrides: object.__init__

    lookup(self, name, default=None)

    source code 

    Looks up name and returns the resolved path associated with the name.

    Parameters:
    • name - Name of the path resource to resolve.
    • default - Default to return if resource cannot be resolved.
    Returns:
    Resolved path associated with name, or default if name can't be resolved.

    fill(self, mapping)

    source code 

    Fills in the singleton's internal mapping from name to resource.

    Parameters:
    • mapping (Dictionary mapping name to path, both as strings.) - Mapping from resource name to path.

    CedarBackup2.xmlutil.Serializer
    Package CedarBackup2 :: Module xmlutil :: Class Serializer

    Class Serializer

    source code

    object --+
             |
            Serializer
    

    XML serializer class.

    This is a customized serializer that I hacked together based on what I found in the PyXML distribution. Basically, around release 2.7.0, the only reason I still had a dependency on PyXML was for the PrettyPrint functionality, and that seemed pointless. So, I stripped the PrettyPrint code out of PyXML and hacked bits of it off until it did just what I needed and no more.

    This code started out being called PrintVisitor, but I decided it makes more sense just calling it a serializer. I've made nearly all of the methods private, and I've added a new high-level serialize() method rather than having clients call visit().

    Anyway, as a consequence of my hacking with it, this can't quite be called a complete XML serializer any more. I ripped out support for HTML and XHTML, and there is also no longer any support for namespaces (which I took out because this dragged along a lot of extra code, and Cedar Backup doesn't use namespaces). However, everything else should pretty much work as expected.
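    In effect, serialize() pretty-prints a DOM tree to a stream with a configurable encoding and indent. A rough standard-library equivalent using minidom (shown for orientation only; the Serializer class itself also handles the node types listed below):

    ```python
    from io import BytesIO
    from xml.dom.minidom import parseString

    # Build a small DOM, then pretty-print it to a stream.  Cedar Backup's
    # Serializer(stream, encoding, indent).serialize(dom) fills the same role,
    # minus the HTML/XHTML and namespace support that was stripped out.
    dom = parseString("<config><option name='collect'/></config>")
    stream = BytesIO()
    stream.write(dom.toprettyxml(indent="   ", encoding="UTF-8"))
    output = stream.getvalue()
    ```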


    Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.

    Instance Methods [hide private]
     
    __init__(self, stream=sys.stdout, encoding='UTF-8', indent=3)
    Initialize a serializer.
    source code
     
    serialize(self, xmlDom)
    Serialize the passed-in XML document.
    source code
     
    _write(self, text) source code
     
    _tryIndent(self) source code
     
    _visit(self, node) source code
     
    _visitNodeList(self, node, exclude=None) source code
     
    _visitNamedNodeMap(self, node) source code
     
    _visitAttr(self, node) source code
     
    _visitProlog(self) source code
     
    _visitDocument(self, node) source code
     
    _visitDocumentFragment(self, node) source code
     
    _visitElement(self, node) source code
     
    _visitText(self, node) source code
     
    _visitDocumentType(self, doctype) source code
     
    _visitEntity(self, node)
    Visited from a NamedNodeMap in DocumentType
    source code
     
    _visitNotation(self, node)
    Visited from a NamedNodeMap in DocumentType
    source code
     
    _visitCDATASection(self, node) source code
     
    _visitComment(self, node) source code
     
    _visitEntityReference(self, node) source code
     
    _visitProcessingInstruction(self, node) source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties [hide private]

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, stream=sys.stdout, encoding='UTF-8', indent=3)
    (Constructor)

    source code 

    Initialize a serializer.

    Parameters:
    • stream - Stream to write output to.
    • encoding - Output encoding.
    • indent - Number of spaces to indent, as an integer
    Overrides: object.__init__

    serialize(self, xmlDom)

    source code 

    Serialize the passed-in XML document.

    Parameters:
    • xmlDom - XML DOM tree to serialize
    Raises:
    • ValueError - If there's an unknown node type in the document.

    _visit(self, node)

    source code 
    Raises:
    • ValueError - If there's an unknown node type in the document.

    rebuild

    Module rebuild


    Functions

    executeRebuild

    Variables

    __package__
    logger

    tools

    Module tools


    Variables


    initialize

    Module initialize


    Functions

    executeInitialize

    Variables

    __package__
    logger

    CedarBackup2.extend.postgresql
    Package CedarBackup2 :: Package extend :: Module postgresql

    Source Code for Module CedarBackup2.extend.postgresql

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2006,2010 Kenneth J. Pronovici. 
     12  # Copyright (c) 2006 Antoine Beaupre. 
     13  # All rights reserved. 
     14  # 
     15  # This program is free software; you can redistribute it and/or 
     16  # modify it under the terms of the GNU General Public License, 
     17  # Version 2, as published by the Free Software Foundation. 
     18  # 
     19  # This program is distributed in the hope that it will be useful, 
     20  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     21  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     22  # 
     23  # Copies of the GNU General Public License are available from 
     24  # the Free Software Foundation website, http://www.gnu.org/. 
     25  # 
     26  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     27  # 
     28  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     29  #            Antoine Beaupre <anarcat@koumbit.org> 
     30  # Language : Python (>= 2.5) 
     31  # Project  : Official Cedar Backup Extensions 
     32  # Revision : $Id: postgresql.py 1022 2011-10-11 23:27:49Z pronovic $ 
     33  # Purpose  : Provides an extension to back up PostgreSQL databases. 
     34  # 
     35  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     36  # This file was created with a width of 132 characters, and NO tabs. 
     37   
     38  ######################################################################## 
     39  # Module documentation 
     40  ######################################################################## 
     41   
     42  """ 
     43  Provides an extension to back up PostgreSQL databases. 
     44   
     45  This is a Cedar Backup extension used to back up PostgreSQL databases via the 
     46  Cedar Backup command line.  It requires a new configurations section 
     47  <postgresql> and is intended to be run either immediately before or immediately 
     48  after the standard collect action.  Aside from its own configuration, it 
     49  requires the options and collect configuration sections in the standard Cedar 
     50  Backup configuration file. 
     51   
     52  The backup is done via the C{pg_dump} or C{pg_dumpall} commands included with 
     53  the PostgreSQL product.  Output can be compressed using C{gzip} or C{bzip2}. 
     54  Administrators can configure the extension either to back up all databases or 
     55  to back up only specific databases.  The extension assumes that the current 
     56  user has passwordless access to the database since there is no easy way to pass 
     57  a password to the C{pg_dump} client. This can be accomplished using appropriate 
     58  voodoo in the C{pg_hba.conf} file. 
     59   
     60  Note that this code always produces a full backup.  There is currently no 
     61  facility for making incremental backups. 
     62   
     63  You should always make C{/etc/cback.conf} unreadable to non-root users once you 
     64  place postgresql configuration into it, since postgresql configuration will 
     65  contain information about available PostgreSQL databases and usernames. 
     66   
     67  Use of this extension I{may} expose usernames in the process listing (via 
     68  C{ps}) when the backup is running if the username is specified in the 
     69  configuration. 
     70   
     71  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     72  @author: Antoine Beaupre <anarcat@koumbit.org> 
     73  """ 
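The dump invocation described above can be sketched with a small standalone helper. This is illustrative only, not the extension's own code: it builds the command line without running it, and C{-U} is the standard PostgreSQL client option for selecting a connection role.

```python
def build_dump_command(user=None, database=None):
    """Build the pg_dump/pg_dumpall command line (sketch only, never executed here)."""
    # pg_dumpall covers the "all databases" case; pg_dump handles a single database
    command = ["pg_dumpall"] if database is None else ["pg_dump"]
    if user is not None:
        command += ["-U", user]  # connect as this PostgreSQL role
    if database is not None:
        command.append(database)
    return command

print(build_dump_command(user="backup", database="sales"))
# → ['pg_dump', '-U', 'backup', 'sales']
```

The real code resolves the command through Cedar Backup's C{resolveCommand} and runs it with C{executeCommand}, as shown later in this module.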
     74   
     75  ######################################################################## 
     76  # Imported modules 
     77  ######################################################################## 
     78   
     79  # System modules 
     80  import os 
     81  import logging 
     82  from gzip import GzipFile 
     83  from bz2 import BZ2File 
     84   
     85  # Cedar Backup modules 
     86  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode 
     87  from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean 
     88  from CedarBackup2.config import VALID_COMPRESS_MODES 
     89  from CedarBackup2.util import resolveCommand, executeCommand 
     90  from CedarBackup2.util import ObjectTypeList, changeOwnership 
     91   
     92   
     93  ######################################################################## 
     94  # Module-wide constants and variables 
     95  ######################################################################## 
     96   
     97  logger = logging.getLogger("CedarBackup2.log.extend.postgresql") 
     98  POSTGRESQLDUMP_COMMAND = [ "pg_dump", ] 
     99  POSTGRESQLDUMPALL_COMMAND = [ "pg_dumpall", ] 
    
########################################################################
# PostgresqlConfig class definition
########################################################################

class PostgresqlConfig(object):

   """
   Class representing PostgreSQL configuration.

   The PostgreSQL configuration information is used for backing up PostgreSQL databases.

   The following restrictions exist on data in this class:

      - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.
      - The 'all' flag must be 'Y' if no databases are defined.
      - The 'all' flag must be 'N' if any databases are defined.
      - Any values in the databases list must be strings.

   @sort: __init__, __repr__, __str__, __cmp__, user, all, databases
   """

   def __init__(self, user=None, compressMode=None, all=None, databases=None):  # pylint: disable=W0622
      """
      Constructor for the C{PostgresqlConfig} class.

      @param user: User to execute backup as.
      @param compressMode: Compress mode for backed-up files.
      @param all: Indicates whether to back up all databases.
      @param databases: List of databases to back up.
      """
      self._user = None
      self._compressMode = None
      self._all = None
      self._databases = None
      self.user = user
      self.compressMode = compressMode
      self.all = all
      self.databases = databases

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PostgresqlConfig(%s, %s, %s)" % (self.user, self.all, self.databases)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.user != other.user:
         if self.user < other.user:
            return -1
         else:
            return 1
      if self.compressMode != other.compressMode:
         if self.compressMode < other.compressMode:
            return -1
         else:
            return 1
      if self.all != other.all:
         if self.all < other.all:
            return -1
         else:
            return 1
      if self.databases != other.databases:
         if self.databases < other.databases:
            return -1
         else:
            return 1
      return 0

   def _setUser(self, value):
      """
      Property target used to set the user value.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("User must be non-empty string.")
      self._user = value

   def _getUser(self):
      """
      Property target used to get the user value.
      """
      return self._user

   def _setCompressMode(self, value):
      """
      Property target used to set the compress mode.
      If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COMPRESS_MODES:
            raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
      self._compressMode = value

   def _getCompressMode(self):
      """
      Property target used to get the compress mode.
      """
      return self._compressMode

   def _setAll(self, value):
      """
      Property target used to set the 'all' flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._all = True
      else:
         self._all = False

   def _getAll(self):
      """
      Property target used to get the 'all' flag.
      """
      return self._all

   def _setDatabases(self, value):
      """
      Property target used to set the databases list.
      Either the value must be C{None} or each element must be a string.
      @raise ValueError: If the value is not a string.
      """
      if value is None:
         self._databases = None
      else:
         for database in value:
            if len(database) < 1:
               raise ValueError("Each database must be a non-empty string.")
         try:
            saved = self._databases
            self._databases = ObjectTypeList(basestring, "string")
            self._databases.extend(value)
         except Exception, e:
            self._databases = saved
            raise e

   def _getDatabases(self):
      """
      Property target used to get the databases list.
      """
      return self._databases

   user = property(_getUser, _setUser, None, "User to execute backup as.")
   compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.")
   all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.")
   databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.")
########################################################################
# LocalConfig class definition
########################################################################

class LocalConfig(object):

   """
   Class representing this extension's configuration document.

   This is not a general-purpose configuration object like the main Cedar
   Backup configuration object.  Instead, it just knows how to parse and emit
   PostgreSQL-specific configuration values.  Third parties who need to read
   and write configuration related to this extension should access it through
   the constructor, C{validate} and C{addConfig} methods.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, postgresql, validate, addConfig
   """

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath}, then configuration will be empty and will be invalid until
      it is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not) this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be
      set to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._postgresql = None
      self.postgresql = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         xmlData = open(xmlPath).read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.postgresql)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.postgresql != other.postgresql:
         if self.postgresql < other.postgresql:
            return -1
         else:
            return 1
      return 0

   def _setPostgresql(self, value):
      """
      Property target used to set the postgresql configuration value.
      If not C{None}, the value must be a C{PostgresqlConfig} object.
      @raise ValueError: If the value is not a C{PostgresqlConfig}
      """
      if value is None:
         self._postgresql = None
      else:
         if not isinstance(value, PostgresqlConfig):
            raise ValueError("Value must be a C{PostgresqlConfig} object.")
         self._postgresql = value

   def _getPostgresql(self):
      """
      Property target used to get the postgresql configuration value.
      """
      return self._postgresql

   postgresql = property(_getPostgresql, _setPostgresql, None, "Postgresql configuration in terms of a C{PostgresqlConfig} object.")

   def validate(self):
      """
      Validates configuration represented by the object.

      The compress mode must be filled in.  Then, if the 'all' flag I{is} set,
      no databases are allowed, and if the 'all' flag is I{not} set, at least
      one database is required.

      @raise ValueError: If one of the validations fails.
      """
      if self.postgresql is None:
         raise ValueError("PostgreSQL section is required.")
      if self.postgresql.compressMode is None:
         raise ValueError("Compress mode value is required.")
      if self.postgresql.all:
         if self.postgresql.databases is not None and self.postgresql.databases != []:
            raise ValueError("Databases cannot be specified if 'all' flag is set.")
      else:
         if self.postgresql.databases is None or len(self.postgresql.databases) < 1:
            raise ValueError("At least one PostgreSQL database must be indicated if 'all' flag is not set.")

   def addConfig(self, xmlDom, parentNode):
      """
      Adds a <postgresql> configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.

      We add the following fields to the document::

         user           //cb_config/postgresql/user
         compressMode   //cb_config/postgresql/compress_mode
         all            //cb_config/postgresql/all

      We also add groups of the following items, one list element per item::

         database       //cb_config/postgresql/database

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      """
      if self.postgresql is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "postgresql")
         addStringNode(xmlDom, sectionNode, "user", self.postgresql.user)
         addStringNode(xmlDom, sectionNode, "compress_mode", self.postgresql.compressMode)
         addBooleanNode(xmlDom, sectionNode, "all", self.postgresql.all)
         if self.postgresql.databases is not None:
            for database in self.postgresql.databases:
               addStringNode(xmlDom, sectionNode, "database", database)

   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls a static method to parse the postgresql configuration section.

      @param xmlData: XML data to be parsed
      @type xmlData: String data

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._postgresql = LocalConfig._parsePostgresql(parentNode)

   @staticmethod
   def _parsePostgresql(parent):
      """
      Parses a postgresql configuration section.

      We read the following fields::

         user           //cb_config/postgresql/user
         compressMode   //cb_config/postgresql/compress_mode
         all            //cb_config/postgresql/all

      We also read groups of the following item, one list element per item::

         databases      //cb_config/postgresql/database

      @param parent: Parent node to search beneath.

      @return: C{PostgresqlConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      postgresql = None
      section = readFirstChild(parent, "postgresql")
      if section is not None:
         postgresql = PostgresqlConfig()
         postgresql.user = readString(section, "user")
         postgresql.compressMode = readString(section, "compress_mode")
         postgresql.all = readBoolean(section, "all")
         postgresql.databases = readStringList(section, "database")
      return postgresql
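As an illustration of the section layout that the parser reads, here is a hypothetical <postgresql> fragment parsed with the standard library alone (the sample values are made up; the real code uses Cedar Backup's C{xmlutil} helpers instead of C{minidom} directly):

```python
from xml.dom import minidom

# Hypothetical configuration fragment; field paths follow the docstrings above.
SAMPLE = """
<cb_config>
  <postgresql>
    <user>backup</user>
    <compress_mode>bzip2</compress_mode>
    <all>N</all>
    <database>customers</database>
    <database>orders</database>
  </postgresql>
</cb_config>
"""

# Locate the <postgresql> section and pull out the repeated <database> items
section = minidom.parseString(SAMPLE).getElementsByTagName("postgresql")[0]
databases = [node.firstChild.data for node in section.getElementsByTagName("database")]
print(databases)
# → ['customers', 'orders']
```

Because 'all' is N here, the databases list must be non-empty for C{validate} to pass.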
########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the PostgreSQL backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If a backup could not be written for some reason.
   """
   logger.debug("Executing PostgreSQL extended action.")
   if config.options is None or config.collect is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   if local.postgresql.all:
      logger.info("Backing up all databases.")
      _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user,
                      config.options.backupUser, config.options.backupGroup, None)
   if local.postgresql.databases is not None and local.postgresql.databases != []:
      logger.debug("Backing up %d individual databases." % len(local.postgresql.databases))
      for database in local.postgresql.databases:
         logger.info("Backing up database [%s]." % database)
         _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user,
                         config.options.backupUser, config.options.backupGroup, database)
   logger.info("Executed the PostgreSQL extended action successfully.")

def _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None):
   """
   Backs up an individual PostgreSQL database, or all databases.

   This internal method wraps the public method and adds some functionality,
   like figuring out a filename, etc.

   @param targetDir: Directory into which backups should be written.
   @param compressMode: Compress mode to be used for backed-up files.
   @param user: User to use for connecting to the database.
   @param backupUser: User to own resulting file.
   @param backupGroup: Group to own resulting file.
   @param database: Name of database, or C{None} for all databases.

   @return: Name of the generated backup file.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the PostgreSQL dump.
   """
   (outputFile, filename) = _getOutputFile(targetDir, database, compressMode)
   try:
      backupDatabase(user, outputFile, database)
   finally:
      outputFile.close()
   if not os.path.exists(filename):
      raise IOError("Dump file [%s] does not seem to exist after backup completed." % filename)
   changeOwnership(filename, backupUser, backupGroup)

def _getOutputFile(targetDir, database, compressMode):
   """
   Opens the output file used for saving the PostgreSQL dump.

   The filename is either C{"postgresqldump.txt"} or
   C{"postgresqldump-<database>.txt"}.  The C{".gz"} or C{".bz2"} extension
   is added if C{compressMode} is C{"gzip"} or C{"bzip2"}, respectively.

   @param targetDir: Target directory to write file in.
   @param database: Name of the database (if any)
   @param compressMode: Compress mode to be used for backed-up files.

   @return: Tuple of (output file object, filename)
   """
   if database is None:
      filename = os.path.join(targetDir, "postgresqldump.txt")
   else:
      filename = os.path.join(targetDir, "postgresqldump-%s.txt" % database)
   if compressMode == "gzip":
      filename = "%s.gz" % filename
      outputFile = GzipFile(filename, "w")
   elif compressMode == "bzip2":
      filename = "%s.bz2" % filename
      outputFile = BZ2File(filename, "w")
   else:
      outputFile = open(filename, "w")
   logger.debug("PostgreSQL dump file will be [%s]." % filename)
   return (outputFile, filename)


############################
# backupDatabase() function
############################

def backupDatabase(user, backupFile, database=None):
   """
   Backs up an individual PostgreSQL database, or all databases.

   This function backs up either a named local PostgreSQL database or all
   local PostgreSQL databases, using the passed-in user for connectivity.
   This is I{always} a full backup.  There is no facility for incremental
   backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller
   is responsible for closing the passed-in backup file.

   @note: Typically, you would use the C{root} user to back up all databases.

   @param user: User to use for connecting to the database.
   @type user: String representing PostgreSQL username.

   @param backupFile: File used for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param database: Name of the database to be backed up.
   @type database: String representing database name, or C{None} for all databases.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the PostgreSQL dump.
   """
   args = []
   if user is not None:
      args.append('-U')
      args.append(user)

   if database is None:
      command = resolveCommand(POSTGRESQLDUMPALL_COMMAND)
   else:
      command = resolveCommand(POSTGRESQLDUMP_COMMAND)
      args.append(database)

   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      if database is None:
         raise IOError("Error [%d] executing PostgreSQL database dump for all databases." % result)
      else:
         raise IOError("Error [%d] executing PostgreSQL database dump for database [%s]." % (result, database))
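The filename scheme implemented by C{_getOutputFile} can be summarized in a standalone sketch (the directory and database names here are made up for illustration; this helper only computes the name and does not open the file):

```python
import os

def dump_filename(targetDir, database=None, compressMode=None):
    # Mirrors the naming rules above: base name plus an optional compression suffix
    name = "postgresqldump.txt" if database is None else "postgresqldump-%s.txt" % database
    if compressMode == "gzip":
        name += ".gz"
    elif compressMode == "bzip2":
        name += ".bz2"
    return os.path.join(targetDir, name)

print(dump_filename("/tmp/collect", "sales", "gzip"))
# → /tmp/collect/postgresqldump-sales.txt.gz (on POSIX)
```

Keeping the database name in the filename is what lets one collect directory hold dumps for several databases side by side.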

CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.actions.store-module.html: store

    Module store


    Functions

    consistencyCheck
    executeStore
    writeImage
    writeImageBlankSafe
    writeStoreIndicator

    Variables

    __package__
    logger

CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.cli-module.html: cli

    Module cli


    Classes

    Options

    Functions

    cli
    setupLogging
    setupPathResolver

    Variables

    COLLECT_INDEX
    COMBINE_ACTIONS
    DATE_FORMAT
    DEFAULT_CONFIG
    DEFAULT_LOGFILE
    DEFAULT_MODE
    DEFAULT_OWNERSHIP
    DISK_LOG_FORMAT
    DISK_OUTPUT_FORMAT
    INITIALIZE_INDEX
    LONG_SWITCHES
    NONCOMBINE_ACTIONS
    PURGE_INDEX
    REBUILD_INDEX
    SCREEN_LOG_FORMAT
    SCREEN_LOG_STREAM
    SHORT_SWITCHES
    STAGE_INDEX
    STORE_INDEX
    VALIDATE_INDEX
    VALID_ACTIONS
    __package__
    logger

CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.constants-pysrc.html: CedarBackup2.actions.constants
    Package CedarBackup2 :: Package actions :: Module constants

    Source Code for Module CedarBackup2.actions.constants

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python (>= 2.5) 
    13  # Project  : Cedar Backup, release 2 
    14  # Revision : $Id: constants.py 998 2010-07-07 19:56:08Z pronovic $ 
    15  # Purpose  : Provides common constants used by standard actions. 
    16  # 
    17  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    18   
    19  ######################################################################## 
    20  # Module documentation 
    21  ######################################################################## 
    22   
    23  """ 
    24  Provides common constants used by standard actions. 
    25  @sort: DIR_TIME_FORMAT, DIGEST_EXTENSION, INDICATOR_PATTERN, 
    26         COLLECT_INDICATOR, STAGE_INDICATOR, STORE_INDICATOR 
    27  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    28  """ 
    29   
    30  ######################################################################## 
    31  # Module-wide constants and variables 
    32  ######################################################################## 
    33   
    34  DIR_TIME_FORMAT      = "%Y/%m/%d" 
    35  DIGEST_EXTENSION     = "sha" 
    36   
    37  INDICATOR_PATTERN    = [ "cback\..*", ] 
    38  COLLECT_INDICATOR    = "cback.collect" 
    39  STAGE_INDICATOR      = "cback.stage" 
    40  STORE_INDICATOR      = "cback.store" 
    41   
    

CedarBackup2-2.22.0/doc/interface/crarr.png
CedarBackup2-2.22.0/doc/interface/CedarBackup2.actions.constants-module.html: CedarBackup2.actions.constants
    Package CedarBackup2 :: Package actions :: Module constants

    Module constants


    Provides common constants used by standard actions.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Variables
      DIR_TIME_FORMAT = '%Y/%m/%d'
      DIGEST_EXTENSION = 'sha'
      INDICATOR_PATTERN = ['cback\\..*']
      COLLECT_INDICATOR = 'cback.collect'
      STAGE_INDICATOR = 'cback.stage'
      STORE_INDICATOR = 'cback.store'
      __package__ = None
CedarBackup2-2.22.0/doc/interface/CedarBackup2.peer.LocalPeer-class.html: CedarBackup2.peer.LocalPeer
    Package CedarBackup2 :: Module peer :: Class LocalPeer

    Class LocalPeer


    object --+
             |
            LocalPeer
    

    Backup peer representing a local peer in a backup pool.

    This is a class representing a local (non-network) peer in a backup pool. Local peers are backed up by simple filesystem copy operations. A local peer has associated with it a name (typically, but not necessarily, a hostname) and a collect directory.

    The public methods other than the constructor are part of a "backup peer" interface shared with the RemotePeer class.

Instance Methods
     
    __init__(self, name, collectDir, ignoreFailureMode=None)
    Initializes a local backup peer.
     
    stagePeer(self, targetDir, ownership=None, permissions=None)
    Stages data from the peer into the indicated local target directory.
     
    checkCollectIndicator(self, collectIndicator=None)
    Checks the collect indicator in the peer's staging directory.
     
    writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None)
    Writes the stage indicator in the peer's staging directory.
     
    _setName(self, value)
    Property target used to set the peer name.
     
    _getName(self)
    Property target used to get the peer name.
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
     
    _getCollectDir(self)
    Property target used to get the collect directory.
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Static Methods
     
    _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None)
    Copies files from the source directory to the target directory.
     
    _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True)
    Copies a source file to a target file.
Properties
      name
    Name of the peer.
      collectDir
    Path to the peer's collect directory (an absolute local path).
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

Method Details

    __init__(self, name, collectDir, ignoreFailureMode=None)
    (Constructor)


    Initializes a local backup peer.

    Note that the collect directory must be an absolute path, but does not have to exist when the object is instantiated. We do a lazy validation on this value since we could (potentially) be creating peer objects before an ongoing backup completed.

    Parameters:
    • name (String, typically a hostname) - Name of the backup peer
    • collectDir (String representing an absolute local path on disk) - Path to the peer's collect directory
    • ignoreFailureMode (One of VALID_FAILURE_MODES) - Ignore failure mode for this peer
    Raises:
    • ValueError - If the name is empty.
    • ValueError - If collect directory is not an absolute path.
    Overrides: object.__init__

    stagePeer(self, targetDir, ownership=None, permissions=None)


    Stages data from the peer into the indicated local target directory.

    The collect and target directories must both already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied.

    Parameters:
    • targetDir (String representing a directory on disk) - Target directory to write data into
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the staged files should have
• permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If collect directory is not a directory or does not exist
    • ValueError - If target directory is not a directory, does not exist or is not absolute.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there were no files to stage (i.e. the directory was empty)
    • IOError - If there is an IO error copying a file.
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • The caller is responsible for checking that the indicator exists, if they care. This function only stages the files within the directory.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    checkCollectIndicator(self, collectIndicator=None)


    Checks the collect indicator in the peer's staging directory.

    When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. We're "stupid" here - if the collect directory doesn't exist, you'll naturally get back False.

    If you need to, you can override the name of the collect indicator file by passing in a different name.

    Parameters:
    • collectIndicator (String representing name of a file in the collect directory) - Name of the collect indicator file to check
    Returns:
    Boolean true/false depending on whether the indicator exists.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None)


    Writes the stage indicator in the peer's staging directory.

    When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete.

    If you need to, you can override the name of the stage indicator file by passing in a different name.

    Parameters:
    • stageIndicator (String representing name of a file in the collect directory) - Name of the indicator file to write
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the indicator file should have
    • permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the indicator file should have
    Raises:
    • ValueError - If the collect directory is not a directory or does not exist.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error creating the file.
    • OSError - If there is an OS error creating or changing permissions on the file.

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None)
    Static Method

    source code 

    Copies files from the source directory to the target directory.

    This function is not recursive. Only the files directly within the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. The source and target directories are allowed to be soft links to a directory, but soft links within the source directory are ignored.

    Parameters:
    • sourceDir (String representing a directory on disk) - Source directory
    • targetDir (String representing a directory on disk) - Target directory
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied files should have
    • permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If source or target is not a directory or does not exist.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error copying the files.
    • OSError - If there is an OS error copying or changing permissions on a file.

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True)
    Static Method

    source code 

    Copies a source file to a target file.

    If the source file is None then the target file will be created or overwritten as an empty file. If the target file is None, this method is a no-op. Attempting to copy a soft link or a directory will result in an exception.

    Parameters:
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied file should have
    • permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the staged files should have
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • ValueError - If the passed-in source file is not a regular file.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If the target file already exists and overwrite is False.
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • When overwrite is False, we will not overwrite a target file that exists when this method is invoked; if the target already exists, we'll raise an exception.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path and cannot be None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If a path cannot be encoded properly.
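The validation rule above amounts to an absolute-path check that deliberately does not touch the disk; a minimal sketch (helper name hypothetical):

```python
import os

def validate_collect_dir(value):
    """Require a non-None absolute path; existence on disk is not checked."""
    if value is None or not os.path.isabs(value):
        raise ValueError("Collect directory must be an absolute path.")
    return value
```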

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    name

    Name of the peer.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Path to the peer's collect directory (an absolute local path).

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.peer-module.html

    CedarBackup2.peer
    Package CedarBackup2 :: Module peer

    Module peer

    source code

    Provides backup peer-related objects and utility functions.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      LocalPeer
    Backup peer representing a local peer in a backup pool.
      RemotePeer
    Backup peer representing a remote peer in a backup pool.
    Variables
      logger = logging.getLogger("CedarBackup2.log.peer")
      DEF_RCP_COMMAND = ['/usr/bin/scp', '-B', '-q', '-C']
      DEF_RSH_COMMAND = ['/usr/bin/ssh']
      DEF_CBACK_COMMAND = '/usr/bin/cback'
      DEF_COLLECT_INDICATOR = 'cback.collect'
    Name of the default collect indicator file.
      DEF_STAGE_INDICATOR = 'cback.stage'
    Name of the default stage indicator file.
      SU_COMMAND = ['su']
      __package__ = 'CedarBackup2'
    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.tools.span-module.html

    span

    Module span


    Classes

    SpanOptions

    Functions

    cli

    Variables

    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.cli._ActionItem-class.html

    CedarBackup2.cli._ActionItem
    Package CedarBackup2 :: Module cli :: Class _ActionItem

    Class _ActionItem

    source code

    object --+
             |
            _ActionItem
    

    Class representing a single action to be executed.

    This class represents a single named action to be executed, and understands how to execute that action.

    The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information.

    This class is also where pre-action and post-action hooks are executed. An action item is instantiated in terms of optional pre- and post-action hook objects (config.ActionHook), which are then executed at the appropriate time (if set).


    Note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type.
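In Python 3 the __cmp__ protocol is gone, but the same ordering described in the note (first by SORT_ORDER, then by index within type) can be expressed with a sort key. The classes below are simplified stand-ins, not the real _ActionItem/_ManagedActionItem:

```python
class Item:
    """Stand-in for _ActionItem: sorts first among types."""
    SORT_ORDER = 0
    def __init__(self, index):
        self.index = index

class ManagedItem(Item):
    """Stand-in for _ManagedActionItem: sorts after plain items."""
    SORT_ORDER = 1

items = [ManagedItem(1), Item(2), Item(1), ManagedItem(0)]
# Sort by type first (SORT_ORDER), then by index within each type.
ordered = sorted(items, key=lambda i: (i.SORT_ORDER, i.index))
```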

    Instance Methods
     
    __init__(self, index, name, preHook, postHook, function)
    Default constructor.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    executeAction(self, configPath, options, config)
    Executes the action associated with an item, including hooks.
    source code
     
    _executeAction(self, configPath, options, config)
    Executes the action, specifically the function associated with the action.
    source code
     
    _executeHook(self, type, hook)
    Executes a hook command via util.executeCommand().
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables
      SORT_ORDER = 0
    Defines a sort order to order properly between types.
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, index, name, preHook, postHook, function)
    (Constructor)

    source code 

    Default constructor.

    It's OK to pass None for index, preHook or postHook, but not for name.

    Parameters:
    • index - Index of the item (or None).
    • name - Name of the action that is being executed.
    • preHook - Pre-action hook in terms of an ActionHook object, or None.
    • postHook - Post-action hook in terms of an ActionHook object, or None.
    • function - Reference to function associated with item.
    Overrides: object.__init__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. The only thing we compare is the item's index.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    executeAction(self, configPath, options, config)

    source code 

    Executes the action associated with an item, including hooks.

    See class notes for more details on how the action is executed.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.
    Raises:
    • Exception - If there is a problem executing the action.
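The hook-wrapping behavior described above can be sketched with a small standalone helper (names are illustrative; the real method takes ActionHook objects and runs them through util.executeCommand):

```python
def execute_with_hooks(action, pre_hook=None, post_hook=None):
    """Run the optional pre-hook, then the action, then the optional post-hook."""
    if pre_hook is not None:
        pre_hook()
    result = action()
    if post_hook is not None:
        post_hook()
    return result

calls = []
result = execute_with_hooks(lambda: calls.append("action") or "done",
                            pre_hook=lambda: calls.append("pre"),
                            post_hook=lambda: calls.append("post"))
```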

    _executeAction(self, configPath, options, config)

    source code 

    Executes the action, specifically the function associated with the action.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.

    _executeHook(self, type, hook)

    source code 

    Executes a hook command via util.executeCommand().

    Parameters:
    • type - String describing the type of hook, for logging.
    • hook - Hook, in terms of a ActionHook object.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.dvdwriter.MediaCapacity-class.html

    CedarBackup2.writers.dvdwriter.MediaCapacity
    Package CedarBackup2 :: Package writers :: Module dvdwriter :: Class MediaCapacity

    Class MediaCapacity

    source code

    object --+
             |
            MediaCapacity
    

    Class encapsulating information about DVD media capacity.

    Space used and space available do not include any information about media lead-in or other overhead.

    Instance Methods
     
    __init__(self, bytesUsed, bytesAvailable)
    Initializes a capacity object.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    _getBytesUsed(self)
    Property target used to get the bytes-used value.
    source code
     
    _getBytesAvailable(self)
    Property target used to get the bytes-available value.
    source code
     
    _getTotalCapacity(self)
    Property target to get the total capacity (used + available).
    source code
     
    _getUtilized(self)
    Property target to get the percent of capacity which is utilized.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __subclasshook__

    Properties
      bytesUsed
    Space used on disc, in bytes.
      bytesAvailable
    Space available on disc, in bytes.
      totalCapacity
    Total capacity of the disc, in bytes.
      utilized
    Percentage of the total capacity which is utilized.

    Inherited from object: __class__

    Method Details

    __init__(self, bytesUsed, bytesAvailable)
    (Constructor)

    source code 

    Initializes a capacity object.

    Raises:
    • ValueError - If the bytes used and available values are not floats.
    Overrides: object.__init__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    Property Details

    bytesUsed

    Space used on disc, in bytes.

    Get Method:
    _getBytesUsed(self) - Property target used to get the bytes-used value.

    bytesAvailable

    Space available on disc, in bytes.

    Get Method:
    _getBytesAvailable(self) - Property target used to get the bytes-available value.

    totalCapacity

    Total capacity of the disc, in bytes.

    Get Method:
    _getTotalCapacity(self) - Property target to get the total capacity (used + available).

    utilized

    Percentage of the total capacity which is utilized.

    Get Method:
    _getUtilized(self) - Property target to get the percent of capacity which is utilized.
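A minimal stand-in (names adapted to snake_case; not the actual dvdwriter implementation) illustrating how the derived properties relate to the two constructor values:

```python
class MediaCapacity:
    """Simplified model: total capacity is used + available, and
    utilized is space used as a percentage of that total."""

    def __init__(self, bytes_used, bytes_available):
        self.bytes_used = float(bytes_used)
        self.bytes_available = float(bytes_available)

    @property
    def total_capacity(self):
        return self.bytes_used + self.bytes_available

    @property
    def utilized(self):
        if self.total_capacity == 0.0:
            return 0.0          # avoid division by zero on empty media info
        return (self.bytes_used / self.total_capacity) * 100.0
```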

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.PreActionHook-class.html

    CedarBackup2.config.PreActionHook
    Package CedarBackup2 :: Module config :: Class PreActionHook

    Class PreActionHook

    source code

    object --+    
             |    
    ActionHook --+
                 |
                PreActionHook
    

    Class representing a pre-action hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a pre-action hook is executed before the named action.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The shell command must be a non-empty string.

    The internal before instance variable is always set to True in this class.
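The restrictions listed above can be sketched as a standalone validation helper (the helper name is hypothetical, not part of the config module):

```python
import re

# Action names must be non-empty strings of lower-case letters and digits.
_ACTION_RE = re.compile(r"^[a-z0-9]+$")

def validate_hook(action, command):
    """Raise ValueError unless action and command meet the documented rules."""
    if not action or _ACTION_RE.match(action) is None:
        raise ValueError("Action name must be lower-case letters and digits.")
    if not command:
        raise ValueError("Shell command must be a non-empty string.")
```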

    Instance Methods
     
    __init__(self, action=None, command=None)
    Constructor for the PreActionHook class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from ActionHook: __str__, __cmp__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from ActionHook: action, command, before, after

    Inherited from object: __class__

    Method Details

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the PreActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    CedarBackup2-2.22.0/doc/interface/epydoc.js

    (epydoc JavaScript support file for the generated pages; garbled script contents omitted)

    CedarBackup2-2.22.0/doc/interface/toc-CedarBackup2.writers.cdwriter-module.html

    cdwriter

    Module cdwriter


    Classes

    CdWriter
    MediaCapacity
    MediaDefinition

    Variables

    CDRECORD_COMMAND
    EJECT_COMMAND
    MEDIA_CDRW_74
    MEDIA_CDRW_80
    MEDIA_CDR_74
    MEDIA_CDR_80
    MKISOFS_COMMAND
    __package__
    logger

    CedarBackup2-2.22.0/doc/interface/api-objects.txt

    CedarBackup2 CedarBackup2-module.html CedarBackup2.__package__ CedarBackup2-module.html#__package__ CedarBackup2.action CedarBackup2.action-module.html CedarBackup2.action.executePurge CedarBackup2.actions.purge-module.html#executePurge CedarBackup2.action.executeRebuild CedarBackup2.actions.rebuild-module.html#executeRebuild CedarBackup2.action.executeStage CedarBackup2.actions.stage-module.html#executeStage CedarBackup2.action.__package__ CedarBackup2.action-module.html#__package__ CedarBackup2.action.executeStore CedarBackup2.actions.store-module.html#executeStore CedarBackup2.action.executeCollect CedarBackup2.actions.collect-module.html#executeCollect CedarBackup2.action.executeValidate CedarBackup2.actions.validate-module.html#executeValidate CedarBackup2.actions CedarBackup2.actions-module.html CedarBackup2.actions.__package__ CedarBackup2.actions-module.html#__package__ CedarBackup2.actions.collect CedarBackup2.actions.collect-module.html CedarBackup2.actions.collect._getTarfilePath CedarBackup2.actions.collect-module.html#_getTarfilePath CedarBackup2.actions.collect._getCollectMode CedarBackup2.actions.collect-module.html#_getCollectMode CedarBackup2.actions.collect._getArchiveMode CedarBackup2.actions.collect-module.html#_getArchiveMode CedarBackup2.actions.collect._writeDigest CedarBackup2.actions.collect-module.html#_writeDigest CedarBackup2.actions.collect.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.collect.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.actions.collect.__package__ CedarBackup2.actions.collect-module.html#__package__ CedarBackup2.actions.collect._executeBackup CedarBackup2.actions.collect-module.html#_executeBackup CedarBackup2.actions.collect._loadDigest CedarBackup2.actions.collect-module.html#_loadDigest
CedarBackup2.actions.collect._collectFile CedarBackup2.actions.collect-module.html#_collectFile CedarBackup2.actions.collect.logger CedarBackup2.actions.collect-module.html#logger CedarBackup2.actions.collect.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.actions.collect._getDereference CedarBackup2.actions.collect-module.html#_getDereference CedarBackup2.actions.collect.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.actions.collect._getLinkDepth CedarBackup2.actions.collect-module.html#_getLinkDepth CedarBackup2.actions.collect._getRecursionLevel CedarBackup2.actions.collect-module.html#_getRecursionLevel CedarBackup2.actions.collect.executeCollect CedarBackup2.actions.collect-module.html#executeCollect CedarBackup2.actions.collect._getIgnoreFile CedarBackup2.actions.collect-module.html#_getIgnoreFile CedarBackup2.actions.collect._getExclusions CedarBackup2.actions.collect-module.html#_getExclusions CedarBackup2.actions.collect._collectDirectory CedarBackup2.actions.collect-module.html#_collectDirectory CedarBackup2.actions.collect._getDigestPath CedarBackup2.actions.collect-module.html#_getDigestPath CedarBackup2.actions.collect.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.actions.constants CedarBackup2.actions.constants-module.html CedarBackup2.actions.constants.INDICATOR_PATTERN CedarBackup2.actions.constants-module.html#INDICATOR_PATTERN CedarBackup2.actions.constants.STAGE_INDICATOR CedarBackup2.actions.constants-module.html#STAGE_INDICATOR CedarBackup2.actions.constants.STORE_INDICATOR CedarBackup2.actions.constants-module.html#STORE_INDICATOR CedarBackup2.actions.constants.DIR_TIME_FORMAT CedarBackup2.actions.constants-module.html#DIR_TIME_FORMAT CedarBackup2.actions.constants.__package__ CedarBackup2.actions.constants-module.html#__package__ CedarBackup2.actions.constants.COLLECT_INDICATOR CedarBackup2.actions.constants-module.html#COLLECT_INDICATOR 
CedarBackup2.actions.constants.DIGEST_EXTENSION CedarBackup2.actions.constants-module.html#DIGEST_EXTENSION CedarBackup2.actions.initialize CedarBackup2.actions.initialize-module.html CedarBackup2.actions.initialize.logger CedarBackup2.actions.initialize-module.html#logger CedarBackup2.actions.initialize.initializeMediaState CedarBackup2.actions.util-module.html#initializeMediaState CedarBackup2.actions.initialize.executeInitialize CedarBackup2.actions.initialize-module.html#executeInitialize CedarBackup2.actions.initialize.__package__ CedarBackup2.actions.initialize-module.html#__package__ CedarBackup2.actions.purge CedarBackup2.actions.purge-module.html CedarBackup2.actions.purge.executePurge CedarBackup2.actions.purge-module.html#executePurge CedarBackup2.actions.purge.logger CedarBackup2.actions.purge-module.html#logger CedarBackup2.actions.purge.__package__ CedarBackup2.actions.purge-module.html#__package__ CedarBackup2.actions.rebuild CedarBackup2.actions.rebuild-module.html CedarBackup2.actions.rebuild.writeStoreIndicator CedarBackup2.actions.store-module.html#writeStoreIndicator CedarBackup2.actions.rebuild.executeRebuild CedarBackup2.actions.rebuild-module.html#executeRebuild CedarBackup2.actions.rebuild.writeImage CedarBackup2.actions.store-module.html#writeImage CedarBackup2.actions.rebuild.__package__ CedarBackup2.actions.rebuild-module.html#__package__ CedarBackup2.actions.rebuild.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.actions.rebuild._findRebuildDirs CedarBackup2.actions.rebuild-module.html#_findRebuildDirs CedarBackup2.actions.rebuild.deriveDayOfWeek CedarBackup2.util-module.html#deriveDayOfWeek CedarBackup2.actions.rebuild.consistencyCheck CedarBackup2.actions.store-module.html#consistencyCheck CedarBackup2.actions.rebuild.logger CedarBackup2.actions.rebuild-module.html#logger CedarBackup2.actions.stage CedarBackup2.actions.stage-module.html CedarBackup2.actions.stage._getRcpCommand 
CedarBackup2.actions.stage-module.html#_getRcpCommand CedarBackup2.actions.stage._getLocalUser CedarBackup2.actions.stage-module.html#_getLocalUser CedarBackup2.actions.stage._getRemotePeers CedarBackup2.actions.stage-module.html#_getRemotePeers CedarBackup2.actions.stage.getUidGid CedarBackup2.util-module.html#getUidGid CedarBackup2.actions.stage._createStagingDirs CedarBackup2.actions.stage-module.html#_createStagingDirs CedarBackup2.actions.stage.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.actions.stage.executeStage CedarBackup2.actions.stage-module.html#executeStage CedarBackup2.actions.stage.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.stage.__package__ CedarBackup2.actions.stage-module.html#__package__ CedarBackup2.actions.stage.logger CedarBackup2.actions.stage-module.html#logger CedarBackup2.actions.stage._getLocalPeers CedarBackup2.actions.stage-module.html#_getLocalPeers CedarBackup2.actions.stage.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.actions.stage._getDailyDir CedarBackup2.actions.stage-module.html#_getDailyDir CedarBackup2.actions.stage.isRunningAsRoot CedarBackup2.util-module.html#isRunningAsRoot CedarBackup2.actions.stage._getIgnoreFailuresFlag CedarBackup2.actions.stage-module.html#_getIgnoreFailuresFlag CedarBackup2.actions.stage._getRemoteUser CedarBackup2.actions.stage-module.html#_getRemoteUser CedarBackup2.actions.store CedarBackup2.actions.store-module.html CedarBackup2.actions.store.writeImage CedarBackup2.actions.store-module.html#writeImage CedarBackup2.actions.store.executeStore CedarBackup2.actions.store-module.html#executeStore CedarBackup2.actions.store._getNewDisc CedarBackup2.actions.store-module.html#_getNewDisc CedarBackup2.actions.store.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.store.createWriter CedarBackup2.actions.util-module.html#createWriter 
CedarBackup2.actions.store.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.actions.store.unmount CedarBackup2.util-module.html#unmount CedarBackup2.actions.store.__package__ CedarBackup2.actions.store-module.html#__package__ CedarBackup2.actions.store.writeStoreIndicator CedarBackup2.actions.store-module.html#writeStoreIndicator CedarBackup2.actions.store.logger CedarBackup2.actions.store-module.html#logger CedarBackup2.actions.store.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.actions.store.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.actions.store._findCorrectDailyDir CedarBackup2.actions.store-module.html#_findCorrectDailyDir CedarBackup2.actions.store.writeImageBlankSafe CedarBackup2.actions.store-module.html#writeImageBlankSafe CedarBackup2.actions.store.buildMediaLabel CedarBackup2.actions.util-module.html#buildMediaLabel CedarBackup2.actions.store.compareContents CedarBackup2.filesystem-module.html#compareContents CedarBackup2.actions.store.consistencyCheck CedarBackup2.actions.store-module.html#consistencyCheck CedarBackup2.actions.store.mount CedarBackup2.util-module.html#mount CedarBackup2.actions.util CedarBackup2.actions.util-module.html CedarBackup2.actions.util.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.actions.util.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.util.createWriter CedarBackup2.actions.util-module.html#createWriter CedarBackup2.actions.util.__package__ CedarBackup2.actions.util-module.html#__package__ CedarBackup2.actions.util.readMediaLabel CedarBackup2.writers.util-module.html#readMediaLabel CedarBackup2.actions.util.logger CedarBackup2.actions.util-module.html#logger CedarBackup2.actions.util._getMediaType CedarBackup2.actions.util-module.html#_getMediaType CedarBackup2.actions.util._getDeviceType CedarBackup2.actions.util-module.html#_getDeviceType 
CedarBackup2.actions.util.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.actions.util.getBackupFiles CedarBackup2.actions.util-module.html#getBackupFiles CedarBackup2.actions.util.MEDIA_LABEL_PREFIX CedarBackup2.actions.util-module.html#MEDIA_LABEL_PREFIX CedarBackup2.actions.util.deviceMounted CedarBackup2.util-module.html#deviceMounted CedarBackup2.actions.util.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.actions.util.buildMediaLabel CedarBackup2.actions.util-module.html#buildMediaLabel CedarBackup2.actions.util.initializeMediaState CedarBackup2.actions.util-module.html#initializeMediaState CedarBackup2.actions.validate CedarBackup2.actions.validate-module.html CedarBackup2.actions.validate._checkDir CedarBackup2.actions.validate-module.html#_checkDir CedarBackup2.actions.validate._validatePurge CedarBackup2.actions.validate-module.html#_validatePurge CedarBackup2.actions.validate._validateReference CedarBackup2.actions.validate-module.html#_validateReference CedarBackup2.actions.validate._validateStage CedarBackup2.actions.validate-module.html#_validateStage CedarBackup2.actions.validate._validateOptions CedarBackup2.actions.validate-module.html#_validateOptions CedarBackup2.actions.validate.__package__ CedarBackup2.actions.validate-module.html#__package__ CedarBackup2.actions.validate.getUidGid CedarBackup2.util-module.html#getUidGid CedarBackup2.actions.validate._validateExtensions CedarBackup2.actions.validate-module.html#_validateExtensions CedarBackup2.actions.validate._validateCollect CedarBackup2.actions.validate-module.html#_validateCollect CedarBackup2.actions.validate.getFunctionReference CedarBackup2.util-module.html#getFunctionReference CedarBackup2.actions.validate.executeValidate CedarBackup2.actions.validate-module.html#executeValidate CedarBackup2.actions.validate._validateStore CedarBackup2.actions.validate-module.html#_validateStore CedarBackup2.actions.validate.createWriter 
CedarBackup2.actions.util-module.html#createWriter CedarBackup2.actions.validate.logger CedarBackup2.actions.validate-module.html#logger CedarBackup2.cli CedarBackup2.cli-module.html CedarBackup2.cli.SHORT_SWITCHES CedarBackup2.cli-module.html#SHORT_SWITCHES CedarBackup2.cli.executeRebuild CedarBackup2.actions.rebuild-module.html#executeRebuild CedarBackup2.cli.LONG_SWITCHES CedarBackup2.cli-module.html#LONG_SWITCHES CedarBackup2.cli.DISK_LOG_FORMAT CedarBackup2.cli-module.html#DISK_LOG_FORMAT CedarBackup2.cli.DEFAULT_LOGFILE CedarBackup2.cli-module.html#DEFAULT_LOGFILE CedarBackup2.cli.DEFAULT_MODE CedarBackup2.cli-module.html#DEFAULT_MODE CedarBackup2.cli.executeStore CedarBackup2.actions.store-module.html#executeStore CedarBackup2.cli._usage CedarBackup2.cli-module.html#_usage CedarBackup2.cli.getFunctionReference CedarBackup2.util-module.html#getFunctionReference CedarBackup2.cli._setupDiskOutputLogging CedarBackup2.cli-module.html#_setupDiskOutputLogging CedarBackup2.cli.cli CedarBackup2.cli-module.html#cli CedarBackup2.cli.customizeOverrides CedarBackup2.customize-module.html#customizeOverrides CedarBackup2.cli.sortDict CedarBackup2.util-module.html#sortDict CedarBackup2.cli.__package__ CedarBackup2.cli-module.html#__package__ CedarBackup2.cli.DISK_OUTPUT_FORMAT CedarBackup2.cli-module.html#DISK_OUTPUT_FORMAT CedarBackup2.cli.executeValidate CedarBackup2.actions.validate-module.html#executeValidate CedarBackup2.cli.VALIDATE_INDEX CedarBackup2.cli-module.html#VALIDATE_INDEX CedarBackup2.cli.executeInitialize CedarBackup2.actions.initialize-module.html#executeInitialize CedarBackup2.cli.getUidGid CedarBackup2.util-module.html#getUidGid CedarBackup2.cli._setupScreenFlowLogging CedarBackup2.cli-module.html#_setupScreenFlowLogging CedarBackup2.cli.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.cli.executeCollect CedarBackup2.actions.collect-module.html#executeCollect CedarBackup2.cli.logger CedarBackup2.cli-module.html#logger 
CedarBackup2.cli.splitCommandLine CedarBackup2.util-module.html#splitCommandLine CedarBackup2.cli.NONCOMBINE_ACTIONS CedarBackup2.cli-module.html#NONCOMBINE_ACTIONS CedarBackup2.cli._setupLogfile CedarBackup2.cli-module.html#_setupLogfile CedarBackup2.cli.STAGE_INDEX CedarBackup2.cli-module.html#STAGE_INDEX CedarBackup2.cli._setupOutputLogging CedarBackup2.cli-module.html#_setupOutputLogging CedarBackup2.cli.executePurge CedarBackup2.actions.purge-module.html#executePurge CedarBackup2.cli.STORE_INDEX CedarBackup2.cli-module.html#STORE_INDEX CedarBackup2.cli.COLLECT_INDEX CedarBackup2.cli-module.html#COLLECT_INDEX CedarBackup2.cli.SCREEN_LOG_STREAM CedarBackup2.cli-module.html#SCREEN_LOG_STREAM CedarBackup2.cli.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.cli.COMBINE_ACTIONS CedarBackup2.cli-module.html#COMBINE_ACTIONS CedarBackup2.cli.DEFAULT_CONFIG CedarBackup2.cli-module.html#DEFAULT_CONFIG CedarBackup2.cli.executeStage CedarBackup2.actions.stage-module.html#executeStage CedarBackup2.cli.DEFAULT_OWNERSHIP CedarBackup2.cli-module.html#DEFAULT_OWNERSHIP CedarBackup2.cli.DATE_FORMAT CedarBackup2.cli-module.html#DATE_FORMAT CedarBackup2.cli.setupPathResolver CedarBackup2.cli-module.html#setupPathResolver CedarBackup2.cli.SCREEN_LOG_FORMAT CedarBackup2.cli-module.html#SCREEN_LOG_FORMAT CedarBackup2.cli.setupLogging CedarBackup2.cli-module.html#setupLogging CedarBackup2.cli._diagnostics CedarBackup2.cli-module.html#_diagnostics CedarBackup2.cli.INITIALIZE_INDEX CedarBackup2.cli-module.html#INITIALIZE_INDEX CedarBackup2.cli._version CedarBackup2.cli-module.html#_version CedarBackup2.cli.PURGE_INDEX CedarBackup2.cli-module.html#PURGE_INDEX CedarBackup2.cli.REBUILD_INDEX CedarBackup2.cli-module.html#REBUILD_INDEX CedarBackup2.cli.VALID_ACTIONS CedarBackup2.cli-module.html#VALID_ACTIONS CedarBackup2.cli._setupFlowLogging CedarBackup2.cli-module.html#_setupFlowLogging CedarBackup2.cli._setupDiskFlowLogging 
CedarBackup2.cli-module.html#_setupDiskFlowLogging CedarBackup2.config CedarBackup2.config-module.html CedarBackup2.config.VALID_MEDIA_TYPES CedarBackup2.config-module.html#VALID_MEDIA_TYPES CedarBackup2.config.VALID_ORDER_MODES CedarBackup2.config-module.html#VALID_ORDER_MODES CedarBackup2.config.VALID_COLLECT_MODES CedarBackup2.config-module.html#VALID_COLLECT_MODES CedarBackup2.config.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.config.addByteQuantityNode CedarBackup2.config-module.html#addByteQuantityNode CedarBackup2.config.validateScsiId CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.config.REWRITABLE_MEDIA_TYPES CedarBackup2.config-module.html#REWRITABLE_MEDIA_TYPES CedarBackup2.config.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.config.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.config.VALID_ARCHIVE_MODES CedarBackup2.config-module.html#VALID_ARCHIVE_MODES CedarBackup2.config.serializeDom CedarBackup2.xmlutil-module.html#serializeDom CedarBackup2.config.DEFAULT_MEDIA_TYPE CedarBackup2.config-module.html#DEFAULT_MEDIA_TYPE CedarBackup2.config.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.config.VALID_CD_MEDIA_TYPES CedarBackup2.config-module.html#VALID_CD_MEDIA_TYPES CedarBackup2.config.__package__ CedarBackup2.config-module.html#__package__ CedarBackup2.config.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.config.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.config.checkUnique CedarBackup2.util-module.html#checkUnique CedarBackup2.config.readInteger CedarBackup2.xmlutil-module.html#readInteger CedarBackup2.config.parseCommaSeparatedString CedarBackup2.util-module.html#parseCommaSeparatedString CedarBackup2.config.isElement CedarBackup2.xmlutil-module.html#isElement CedarBackup2.config.logger CedarBackup2.config-module.html#logger CedarBackup2.config.addStringNode 
CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.config.VALID_DEVICE_TYPES CedarBackup2.config-module.html#VALID_DEVICE_TYPES CedarBackup2.config.DEFAULT_DEVICE_TYPE CedarBackup2.config-module.html#DEFAULT_DEVICE_TYPE CedarBackup2.config.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.config.readChildren CedarBackup2.xmlutil-module.html#readChildren CedarBackup2.config.VALID_FAILURE_MODES CedarBackup2.config-module.html#VALID_FAILURE_MODES CedarBackup2.config.VALID_BYTE_UNITS CedarBackup2.config-module.html#VALID_BYTE_UNITS CedarBackup2.config.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.config.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.config.VALID_BLANK_MODES CedarBackup2.config-module.html#VALID_BLANK_MODES CedarBackup2.config.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.config.VALID_COMPRESS_MODES CedarBackup2.config-module.html#VALID_COMPRESS_MODES CedarBackup2.config.ACTION_NAME_REGEX CedarBackup2.config-module.html#ACTION_NAME_REGEX CedarBackup2.config.createOutputDom CedarBackup2.xmlutil-module.html#createOutputDom CedarBackup2.config.VALID_DVD_MEDIA_TYPES CedarBackup2.config-module.html#VALID_DVD_MEDIA_TYPES CedarBackup2.config.readByteQuantity CedarBackup2.config-module.html#readByteQuantity CedarBackup2.config.addIntegerNode CedarBackup2.xmlutil-module.html#addIntegerNode CedarBackup2.customize CedarBackup2.customize-module.html CedarBackup2.customize.DEBIAN_MKISOFS CedarBackup2.customize-module.html#DEBIAN_MKISOFS CedarBackup2.customize.customizeOverrides CedarBackup2.customize-module.html#customizeOverrides CedarBackup2.customize.__package__ CedarBackup2.customize-module.html#__package__ CedarBackup2.customize.PLATFORM CedarBackup2.customize-module.html#PLATFORM CedarBackup2.customize.DEBIAN_CDRECORD CedarBackup2.customize-module.html#DEBIAN_CDRECORD CedarBackup2.customize.logger CedarBackup2.customize-module.html#logger 
CedarBackup2.extend CedarBackup2.extend-module.html CedarBackup2.extend.__package__ CedarBackup2.extend-module.html#__package__ CedarBackup2.extend.capacity CedarBackup2.extend.capacity-module.html CedarBackup2.extend.capacity.readByteQuantity CedarBackup2.config-module.html#readByteQuantity CedarBackup2.extend.capacity.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.extend.capacity.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.capacity.executeAction CedarBackup2.extend.capacity-module.html#executeAction CedarBackup2.extend.capacity.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.capacity.__package__ CedarBackup2.extend.capacity-module.html#__package__ CedarBackup2.extend.capacity.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.capacity.addByteQuantityNode CedarBackup2.config-module.html#addByteQuantityNode CedarBackup2.extend.capacity.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.extend.capacity.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.capacity.logger CedarBackup2.extend.capacity-module.html#logger CedarBackup2.extend.capacity.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.capacity.createWriter CedarBackup2.actions.util-module.html#createWriter CedarBackup2.extend.encrypt CedarBackup2.extend.encrypt-module.html CedarBackup2.extend.encrypt.executeAction CedarBackup2.extend.encrypt-module.html#executeAction CedarBackup2.extend.encrypt.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.encrypt.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.encrypt.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.extend.encrypt._encryptFile CedarBackup2.extend.encrypt-module.html#_encryptFile CedarBackup2.extend.encrypt.addContainerNode 
CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.encrypt.__package__ CedarBackup2.extend.encrypt-module.html#__package__ CedarBackup2.extend.encrypt.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.encrypt.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.encrypt._encryptDailyDir CedarBackup2.extend.encrypt-module.html#_encryptDailyDir CedarBackup2.extend.encrypt.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.extend.encrypt.logger CedarBackup2.extend.encrypt-module.html#logger CedarBackup2.extend.encrypt.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.encrypt.getBackupFiles CedarBackup2.actions.util-module.html#getBackupFiles CedarBackup2.extend.encrypt.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.encrypt.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.encrypt.VALID_ENCRYPT_MODES CedarBackup2.extend.encrypt-module.html#VALID_ENCRYPT_MODES CedarBackup2.extend.encrypt._confirmGpgRecipient CedarBackup2.extend.encrypt-module.html#_confirmGpgRecipient CedarBackup2.extend.encrypt.GPG_COMMAND CedarBackup2.extend.encrypt-module.html#GPG_COMMAND CedarBackup2.extend.encrypt.ENCRYPT_INDICATOR CedarBackup2.extend.encrypt-module.html#ENCRYPT_INDICATOR CedarBackup2.extend.encrypt._encryptFileWithGpg CedarBackup2.extend.encrypt-module.html#_encryptFileWithGpg CedarBackup2.extend.mbox CedarBackup2.extend.mbox-module.html CedarBackup2.extend.mbox._getTarfilePath CedarBackup2.extend.mbox-module.html#_getTarfilePath CedarBackup2.extend.mbox._getCollectMode CedarBackup2.extend.mbox-module.html#_getCollectMode CedarBackup2.extend.mbox._getExclusions CedarBackup2.extend.mbox-module.html#_getExclusions CedarBackup2.extend.mbox.executeAction CedarBackup2.extend.mbox-module.html#executeAction CedarBackup2.extend.mbox._getOutputFile 
CedarBackup2.extend.mbox-module.html#_getOutputFile CedarBackup2.extend.mbox.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.mbox.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.mbox.GREPMAIL_COMMAND CedarBackup2.extend.mbox-module.html#GREPMAIL_COMMAND CedarBackup2.extend.mbox.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.extend.mbox.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.mbox._getRevisionPath CedarBackup2.extend.mbox-module.html#_getRevisionPath CedarBackup2.extend.mbox.__package__ CedarBackup2.extend.mbox-module.html#__package__ CedarBackup2.extend.mbox.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.mbox.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.mbox.isElement CedarBackup2.xmlutil-module.html#isElement CedarBackup2.extend.mbox.logger CedarBackup2.extend.mbox-module.html#logger CedarBackup2.extend.mbox._backupMboxDir CedarBackup2.extend.mbox-module.html#_backupMboxDir CedarBackup2.extend.mbox._backupMboxFile CedarBackup2.extend.mbox-module.html#_backupMboxFile CedarBackup2.extend.mbox.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.mbox.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.mbox._getBackupPath CedarBackup2.extend.mbox-module.html#_getBackupPath CedarBackup2.extend.mbox.readChildren CedarBackup2.xmlutil-module.html#readChildren CedarBackup2.extend.mbox.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.extend.mbox.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.extend.mbox.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.mbox._getCompressMode CedarBackup2.extend.mbox-module.html#_getCompressMode CedarBackup2.extend.mbox._writeNewRevision CedarBackup2.extend.mbox-module.html#_writeNewRevision 
CedarBackup2.extend.mbox._loadLastRevision CedarBackup2.extend.mbox-module.html#_loadLastRevision CedarBackup2.extend.mbox.REVISION_PATH_EXTENSION CedarBackup2.extend.mbox-module.html#REVISION_PATH_EXTENSION CedarBackup2.extend.mbox.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.mysql CedarBackup2.extend.mysql-module.html CedarBackup2.extend.mysql.executeAction CedarBackup2.extend.mysql-module.html#executeAction CedarBackup2.extend.mysql.MYSQLDUMP_COMMAND CedarBackup2.extend.mysql-module.html#MYSQLDUMP_COMMAND CedarBackup2.extend.mysql.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.mysql.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.mysql.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.mysql.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.extend.mysql.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.mysql.__package__ CedarBackup2.extend.mysql-module.html#__package__ CedarBackup2.extend.mysql.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.mysql.logger CedarBackup2.extend.mysql-module.html#logger CedarBackup2.extend.mysql.backupDatabase CedarBackup2.extend.mysql-module.html#backupDatabase CedarBackup2.extend.mysql._getOutputFile CedarBackup2.extend.mysql-module.html#_getOutputFile CedarBackup2.extend.mysql.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.mysql.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.mysql.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.mysql.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.mysql._backupDatabase CedarBackup2.extend.mysql-module.html#_backupDatabase CedarBackup2.extend.mysql.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.extend.postgresql 
CedarBackup2.extend.postgresql-module.html CedarBackup2.extend.postgresql.executeAction CedarBackup2.extend.postgresql-module.html#executeAction CedarBackup2.extend.postgresql.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.postgresql.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.postgresql.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.postgresql.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.extend.postgresql.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.postgresql.__package__ CedarBackup2.extend.postgresql-module.html#__package__ CedarBackup2.extend.postgresql.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.postgresql.logger CedarBackup2.extend.postgresql-module.html#logger CedarBackup2.extend.postgresql.backupDatabase CedarBackup2.extend.postgresql-module.html#backupDatabase CedarBackup2.extend.postgresql._getOutputFile CedarBackup2.extend.postgresql-module.html#_getOutputFile CedarBackup2.extend.postgresql.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.postgresql.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.postgresql.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.postgresql.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.postgresql.POSTGRESQLDUMP_COMMAND CedarBackup2.extend.postgresql-module.html#POSTGRESQLDUMP_COMMAND CedarBackup2.extend.postgresql._backupDatabase CedarBackup2.extend.postgresql-module.html#_backupDatabase CedarBackup2.extend.postgresql.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.extend.postgresql.POSTGRESQLDUMPALL_COMMAND CedarBackup2.extend.postgresql-module.html#POSTGRESQLDUMPALL_COMMAND CedarBackup2.extend.split CedarBackup2.extend.split-module.html 
CedarBackup2.extend.split.executeAction CedarBackup2.extend.split-module.html#executeAction CedarBackup2.extend.split.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.split.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.extend.split._splitFile CedarBackup2.extend.split-module.html#_splitFile CedarBackup2.extend.split.SPLIT_COMMAND CedarBackup2.extend.split-module.html#SPLIT_COMMAND CedarBackup2.extend.split.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.split.__package__ CedarBackup2.extend.split-module.html#__package__ CedarBackup2.extend.split.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.split.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.split.logger CedarBackup2.extend.split-module.html#logger CedarBackup2.extend.split._splitDailyDir CedarBackup2.extend.split-module.html#_splitDailyDir CedarBackup2.extend.split.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.extend.split.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.split.getBackupFiles CedarBackup2.actions.util-module.html#getBackupFiles CedarBackup2.extend.split.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.split.SPLIT_INDICATOR CedarBackup2.extend.split-module.html#SPLIT_INDICATOR CedarBackup2.extend.split.addByteQuantityNode CedarBackup2.config-module.html#addByteQuantityNode CedarBackup2.extend.split.readByteQuantity CedarBackup2.config-module.html#readByteQuantity CedarBackup2.extend.subversion CedarBackup2.extend.subversion-module.html CedarBackup2.extend.subversion._getCollectMode CedarBackup2.extend.subversion-module.html#_getCollectMode CedarBackup2.extend.subversion.SVNADMIN_COMMAND CedarBackup2.extend.subversion-module.html#SVNADMIN_COMMAND CedarBackup2.extend.subversion._getExclusions 
CedarBackup2.extend.subversion-module.html#_getExclusions CedarBackup2.extend.subversion.executeAction CedarBackup2.extend.subversion-module.html#executeAction CedarBackup2.extend.subversion.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.subversion.SVNLOOK_COMMAND CedarBackup2.extend.subversion-module.html#SVNLOOK_COMMAND CedarBackup2.extend.subversion.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.subversion.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.subversion.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.extend.subversion._getRepositoryPaths CedarBackup2.extend.subversion-module.html#_getRepositoryPaths CedarBackup2.extend.subversion._getRevisionPath CedarBackup2.extend.subversion-module.html#_getRevisionPath CedarBackup2.extend.subversion.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.subversion.__package__ CedarBackup2.extend.subversion-module.html#__package__ CedarBackup2.extend.subversion.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.subversion.isElement CedarBackup2.xmlutil-module.html#isElement CedarBackup2.extend.subversion.logger CedarBackup2.extend.subversion-module.html#logger CedarBackup2.extend.subversion.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.subversion._getOutputFile CedarBackup2.extend.subversion-module.html#_getOutputFile CedarBackup2.extend.subversion.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.subversion.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.subversion.backupRepository CedarBackup2.extend.subversion-module.html#backupRepository CedarBackup2.extend.subversion._getBackupPath CedarBackup2.extend.subversion-module.html#_getBackupPath CedarBackup2.extend.subversion.readChildren CedarBackup2.xmlutil-module.html#readChildren 
CedarBackup2.extend.subversion.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.extend.subversion.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.extend.subversion.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.subversion.getYoungestRevision CedarBackup2.extend.subversion-module.html#getYoungestRevision CedarBackup2.extend.subversion._writeLastRevision CedarBackup2.extend.subversion-module.html#_writeLastRevision CedarBackup2.extend.subversion._getCompressMode CedarBackup2.extend.subversion-module.html#_getCompressMode CedarBackup2.extend.subversion.backupBDBRepository CedarBackup2.extend.subversion-module.html#backupBDBRepository CedarBackup2.extend.subversion._backupRepository CedarBackup2.extend.subversion-module.html#_backupRepository CedarBackup2.extend.subversion._loadLastRevision CedarBackup2.extend.subversion-module.html#_loadLastRevision CedarBackup2.extend.subversion.REVISION_PATH_EXTENSION CedarBackup2.extend.subversion-module.html#REVISION_PATH_EXTENSION CedarBackup2.extend.subversion.backupFSFSRepository CedarBackup2.extend.subversion-module.html#backupFSFSRepository CedarBackup2.extend.sysinfo CedarBackup2.extend.sysinfo-module.html CedarBackup2.extend.sysinfo._getOutputFile CedarBackup2.extend.sysinfo-module.html#_getOutputFile CedarBackup2.extend.sysinfo.logger CedarBackup2.extend.sysinfo-module.html#logger CedarBackup2.extend.sysinfo.DPKG_PATH CedarBackup2.extend.sysinfo-module.html#DPKG_PATH CedarBackup2.extend.sysinfo.FDISK_COMMAND CedarBackup2.extend.sysinfo-module.html#FDISK_COMMAND CedarBackup2.extend.sysinfo.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.sysinfo._dumpPartitionTable CedarBackup2.extend.sysinfo-module.html#_dumpPartitionTable CedarBackup2.extend.sysinfo.executeAction CedarBackup2.extend.sysinfo-module.html#executeAction CedarBackup2.extend.sysinfo.DPKG_COMMAND CedarBackup2.extend.sysinfo-module.html#DPKG_COMMAND 
CedarBackup2.extend.sysinfo.LS_COMMAND CedarBackup2.extend.sysinfo-module.html#LS_COMMAND CedarBackup2.extend.sysinfo._dumpFilesystemContents CedarBackup2.extend.sysinfo-module.html#_dumpFilesystemContents CedarBackup2.extend.sysinfo.__package__ CedarBackup2.extend.sysinfo-module.html#__package__ CedarBackup2.extend.sysinfo.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.sysinfo._dumpDebianPackages CedarBackup2.extend.sysinfo-module.html#_dumpDebianPackages CedarBackup2.extend.sysinfo.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.sysinfo.FDISK_PATH CedarBackup2.extend.sysinfo-module.html#FDISK_PATH CedarBackup2.filesystem CedarBackup2.filesystem-module.html CedarBackup2.filesystem.normalizeDir CedarBackup2.filesystem-module.html#normalizeDir CedarBackup2.filesystem.firstFit CedarBackup2.knapsack-module.html#firstFit CedarBackup2.filesystem.calculateFileAge CedarBackup2.util-module.html#calculateFileAge CedarBackup2.filesystem.removeKeys CedarBackup2.util-module.html#removeKeys CedarBackup2.filesystem.alternateFit CedarBackup2.knapsack-module.html#alternateFit CedarBackup2.filesystem.__package__ CedarBackup2.filesystem-module.html#__package__ CedarBackup2.filesystem.worstFit CedarBackup2.knapsack-module.html#worstFit CedarBackup2.filesystem.logger CedarBackup2.filesystem-module.html#logger CedarBackup2.filesystem.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.filesystem.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.filesystem.bestFit CedarBackup2.knapsack-module.html#bestFit CedarBackup2.filesystem.compareDigestMaps CedarBackup2.filesystem-module.html#compareDigestMaps CedarBackup2.filesystem.compareContents CedarBackup2.filesystem-module.html#compareContents CedarBackup2.filesystem.dereferenceLink CedarBackup2.util-module.html#dereferenceLink CedarBackup2.image CedarBackup2.image-module.html CedarBackup2.image.__package__ 
CedarBackup2.image-module.html#__package__ CedarBackup2.knapsack CedarBackup2.knapsack-module.html CedarBackup2.knapsack.bestFit CedarBackup2.knapsack-module.html#bestFit CedarBackup2.knapsack.firstFit CedarBackup2.knapsack-module.html#firstFit CedarBackup2.knapsack.alternateFit CedarBackup2.knapsack-module.html#alternateFit CedarBackup2.knapsack.worstFit CedarBackup2.knapsack-module.html#worstFit CedarBackup2.knapsack.__package__ CedarBackup2.knapsack-module.html#__package__ CedarBackup2.peer CedarBackup2.peer-module.html CedarBackup2.peer.SU_COMMAND CedarBackup2.peer-module.html#SU_COMMAND CedarBackup2.peer.DEF_CBACK_COMMAND CedarBackup2.peer-module.html#DEF_CBACK_COMMAND CedarBackup2.peer.DEF_RSH_COMMAND CedarBackup2.peer-module.html#DEF_RSH_COMMAND CedarBackup2.peer.DEF_STAGE_INDICATOR CedarBackup2.peer-module.html#DEF_STAGE_INDICATOR CedarBackup2.peer.splitCommandLine CedarBackup2.util-module.html#splitCommandLine CedarBackup2.peer.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.peer.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.peer.__package__ CedarBackup2.peer-module.html#__package__ CedarBackup2.peer.DEF_COLLECT_INDICATOR CedarBackup2.peer-module.html#DEF_COLLECT_INDICATOR CedarBackup2.peer.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.peer.DEF_RCP_COMMAND CedarBackup2.peer-module.html#DEF_RCP_COMMAND CedarBackup2.peer.isRunningAsRoot CedarBackup2.util-module.html#isRunningAsRoot CedarBackup2.peer.logger CedarBackup2.peer-module.html#logger CedarBackup2.release CedarBackup2.release-module.html CedarBackup2.release.COPYRIGHT CedarBackup2.release-module.html#COPYRIGHT CedarBackup2.release.AUTHOR CedarBackup2.release-module.html#AUTHOR CedarBackup2.release.URL CedarBackup2.release-module.html#URL CedarBackup2.release.__package__ CedarBackup2.release-module.html#__package__ CedarBackup2.release.VERSION CedarBackup2.release-module.html#VERSION CedarBackup2.release.DATE 
CedarBackup2.release-module.html#DATE CedarBackup2.release.EMAIL CedarBackup2.release-module.html#EMAIL CedarBackup2.testutil CedarBackup2.testutil-module.html CedarBackup2.testutil.changeFileAge CedarBackup2.testutil-module.html#changeFileAge CedarBackup2.testutil.platformCygwin CedarBackup2.testutil-module.html#platformCygwin CedarBackup2.testutil.platformHasEcho CedarBackup2.testutil-module.html#platformHasEcho CedarBackup2.testutil.randomFilename CedarBackup2.testutil-module.html#randomFilename CedarBackup2.testutil.getLogin CedarBackup2.testutil-module.html#getLogin CedarBackup2.testutil.buildPath CedarBackup2.testutil-module.html#buildPath CedarBackup2.testutil._isPlatform CedarBackup2.testutil-module.html#_isPlatform CedarBackup2.testutil.platformDebian CedarBackup2.testutil-module.html#platformDebian CedarBackup2.testutil.setupPathResolver CedarBackup2.cli-module.html#setupPathResolver CedarBackup2.testutil.platformSupportsPermissions CedarBackup2.testutil-module.html#platformSupportsPermissions CedarBackup2.testutil.findResources CedarBackup2.testutil-module.html#findResources CedarBackup2.testutil.customizeOverrides CedarBackup2.customize-module.html#customizeOverrides CedarBackup2.testutil.captureOutput CedarBackup2.testutil-module.html#captureOutput CedarBackup2.testutil.setupDebugLogger CedarBackup2.testutil-module.html#setupDebugLogger CedarBackup2.testutil.__package__ CedarBackup2.testutil-module.html#__package__ CedarBackup2.testutil.extractTar CedarBackup2.testutil-module.html#extractTar CedarBackup2.testutil.platformRequiresBinaryRead CedarBackup2.testutil-module.html#platformRequiresBinaryRead CedarBackup2.testutil.platformSupportsLinks CedarBackup2.testutil-module.html#platformSupportsLinks CedarBackup2.testutil.commandAvailable CedarBackup2.testutil-module.html#commandAvailable CedarBackup2.testutil.platformMacOsX CedarBackup2.testutil-module.html#platformMacOsX CedarBackup2.testutil.getMaskAsMode CedarBackup2.testutil-module.html#getMaskAsMode 
CedarBackup2.testutil.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.testutil.removedir CedarBackup2.testutil-module.html#removedir CedarBackup2.testutil.availableLocales CedarBackup2.testutil-module.html#availableLocales CedarBackup2.testutil.setupOverrides CedarBackup2.testutil-module.html#setupOverrides CedarBackup2.testutil.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.testutil.runningAsRoot CedarBackup2.testutil-module.html#runningAsRoot CedarBackup2.testutil.platformWindows CedarBackup2.testutil-module.html#platformWindows CedarBackup2.testutil.failUnlessAssignRaises CedarBackup2.testutil-module.html#failUnlessAssignRaises CedarBackup2.testutil.hexFloatLiteralAllowed CedarBackup2.testutil-module.html#hexFloatLiteralAllowed CedarBackup2.tools CedarBackup2.tools-module.html CedarBackup2.tools.__package__ CedarBackup2.tools-module.html#__package__ CedarBackup2.tools.span CedarBackup2.tools.span-module.html CedarBackup2.tools.span._writeDisc CedarBackup2.tools.span-module.html#_writeDisc CedarBackup2.tools.span.normalizeDir CedarBackup2.filesystem-module.html#normalizeDir CedarBackup2.tools.span.compareDigestMaps CedarBackup2.filesystem-module.html#compareDigestMaps CedarBackup2.tools.span._getFloat CedarBackup2.tools.span-module.html#_getFloat CedarBackup2.tools.span._getReturn CedarBackup2.tools.span-module.html#_getReturn CedarBackup2.tools.span._usage CedarBackup2.tools.span-module.html#_usage CedarBackup2.tools.span._getChoiceAnswer CedarBackup2.tools.span-module.html#_getChoiceAnswer CedarBackup2.tools.span.unmount CedarBackup2.util-module.html#unmount CedarBackup2.tools.span._discConsistencyCheck CedarBackup2.tools.span-module.html#_discConsistencyCheck CedarBackup2.tools.span.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.tools.span._findDailyDirs CedarBackup2.tools.span-module.html#_findDailyDirs CedarBackup2.tools.span.__package__ CedarBackup2.tools.span-module.html#__package__ 
CedarBackup2.tools.span._executeAction CedarBackup2.tools.span-module.html#_executeAction CedarBackup2.tools.span._discInitializeImage CedarBackup2.tools.span-module.html#_discInitializeImage CedarBackup2.tools.span.setupLogging CedarBackup2.cli-module.html#setupLogging CedarBackup2.tools.span._getWriter CedarBackup2.tools.span-module.html#_getWriter CedarBackup2.tools.span.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.tools.span.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.tools.span.logger CedarBackup2.tools.span-module.html#logger CedarBackup2.tools.span._consistencyCheck CedarBackup2.tools.span-module.html#_consistencyCheck CedarBackup2.tools.span._getYesNoAnswer CedarBackup2.tools.span-module.html#_getYesNoAnswer CedarBackup2.tools.span._discWriteImage CedarBackup2.tools.span-module.html#_discWriteImage CedarBackup2.tools.span.cli CedarBackup2.tools.span-module.html#cli CedarBackup2.tools.span.createWriter CedarBackup2.actions.util-module.html#createWriter CedarBackup2.tools.span._version CedarBackup2.tools.span-module.html#_version CedarBackup2.tools.span.setupPathResolver CedarBackup2.cli-module.html#setupPathResolver CedarBackup2.tools.span.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.tools.span.mount CedarBackup2.util-module.html#mount CedarBackup2.tools.span._writeStoreIndicator CedarBackup2.tools.span-module.html#_writeStoreIndicator CedarBackup2.util CedarBackup2.util-module.html CedarBackup2.util.SECONDS_PER_DAY CedarBackup2.util-module.html#SECONDS_PER_DAY CedarBackup2.util.unmount CedarBackup2.util-module.html#unmount CedarBackup2.util.UNIT_BYTES CedarBackup2.util-module.html#UNIT_BYTES CedarBackup2.util.parseCommaSeparatedString CedarBackup2.util-module.html#parseCommaSeparatedString CedarBackup2.util.UNIT_SECTORS CedarBackup2.util-module.html#UNIT_SECTORS CedarBackup2.util.getUidGid CedarBackup2.util-module.html#getUidGid 
CedarBackup2.util._UID_GID_AVAILABLE CedarBackup2.util-module.html#_UID_GID_AVAILABLE CedarBackup2.util.getFunctionReference CedarBackup2.util-module.html#getFunctionReference CedarBackup2.util.deriveDayOfWeek CedarBackup2.util-module.html#deriveDayOfWeek CedarBackup2.util.HOURS_PER_DAY CedarBackup2.util-module.html#HOURS_PER_DAY CedarBackup2.util.BYTES_PER_MBYTE CedarBackup2.util-module.html#BYTES_PER_MBYTE CedarBackup2.util.removeKeys CedarBackup2.util-module.html#removeKeys CedarBackup2.util.deviceMounted CedarBackup2.util-module.html#deviceMounted CedarBackup2.util.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.util.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.util.sanitizeEnvironment CedarBackup2.util-module.html#sanitizeEnvironment CedarBackup2.util.UNIT_MBYTES CedarBackup2.util-module.html#UNIT_MBYTES CedarBackup2.util.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.util.UNIT_KBYTES CedarBackup2.util-module.html#UNIT_KBYTES CedarBackup2.util.DEFAULT_LANGUAGE CedarBackup2.util-module.html#DEFAULT_LANGUAGE CedarBackup2.util.UNIT_GBYTES CedarBackup2.util-module.html#UNIT_GBYTES CedarBackup2.util.__package__ CedarBackup2.util-module.html#__package__ CedarBackup2.util.nullDevice CedarBackup2.util-module.html#nullDevice CedarBackup2.util.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.util.UMOUNT_COMMAND CedarBackup2.util-module.html#UMOUNT_COMMAND CedarBackup2.util.MBYTES_PER_GBYTE CedarBackup2.util-module.html#MBYTES_PER_GBYTE CedarBackup2.util.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.util.MOUNT_COMMAND CedarBackup2.util-module.html#MOUNT_COMMAND CedarBackup2.util.logger CedarBackup2.util-module.html#logger CedarBackup2.util.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.util.SECONDS_PER_MINUTE CedarBackup2.util-module.html#SECONDS_PER_MINUTE CedarBackup2.util.executeCommand 
CedarBackup2.util-module.html#executeCommand CedarBackup2.util.LOCALE_VARS CedarBackup2.util-module.html#LOCALE_VARS CedarBackup2.util.MTAB_FILE CedarBackup2.util-module.html#MTAB_FILE CedarBackup2.util.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.util.BYTES_PER_SECTOR CedarBackup2.util-module.html#BYTES_PER_SECTOR CedarBackup2.util.KBYTES_PER_MBYTE CedarBackup2.util-module.html#KBYTES_PER_MBYTE CedarBackup2.util.LANG_VAR CedarBackup2.util-module.html#LANG_VAR CedarBackup2.util.MINUTES_PER_HOUR CedarBackup2.util-module.html#MINUTES_PER_HOUR CedarBackup2.util.BYTES_PER_KBYTE CedarBackup2.util-module.html#BYTES_PER_KBYTE CedarBackup2.util.sortDict CedarBackup2.util-module.html#sortDict CedarBackup2.util.isRunningAsRoot CedarBackup2.util-module.html#isRunningAsRoot CedarBackup2.util.splitCommandLine CedarBackup2.util-module.html#splitCommandLine CedarBackup2.util.outputLogger CedarBackup2.util-module.html#outputLogger CedarBackup2.util.BYTES_PER_GBYTE CedarBackup2.util-module.html#BYTES_PER_GBYTE CedarBackup2.util.calculateFileAge CedarBackup2.util-module.html#calculateFileAge CedarBackup2.util.checkUnique CedarBackup2.util-module.html#checkUnique CedarBackup2.util.ISO_SECTOR_SIZE CedarBackup2.util-module.html#ISO_SECTOR_SIZE CedarBackup2.util.mount CedarBackup2.util-module.html#mount CedarBackup2.util.dereferenceLink CedarBackup2.util-module.html#dereferenceLink CedarBackup2.writer CedarBackup2.writer-module.html CedarBackup2.writer.validateScsiId CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.writer.__package__ CedarBackup2.writer-module.html#__package__ CedarBackup2.writer.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.writers CedarBackup2.writers-module.html CedarBackup2.writers.__package__ CedarBackup2.writers-module.html#__package__ CedarBackup2.writers.cdwriter CedarBackup2.writers.cdwriter-module.html CedarBackup2.writers.cdwriter.validateScsiId 
CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.writers.cdwriter.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.writers.cdwriter.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.writers.cdwriter.MEDIA_CDRW_80 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDRW_80 CedarBackup2.writers.cdwriter.__package__ CedarBackup2.writers.cdwriter-module.html#__package__ CedarBackup2.writers.cdwriter.CDRECORD_COMMAND CedarBackup2.writers.cdwriter-module.html#CDRECORD_COMMAND CedarBackup2.writers.cdwriter.logger CedarBackup2.writers.cdwriter-module.html#logger CedarBackup2.writers.cdwriter.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.writers.cdwriter.EJECT_COMMAND CedarBackup2.writers.cdwriter-module.html#EJECT_COMMAND CedarBackup2.writers.cdwriter.validateDevice CedarBackup2.writers.util-module.html#validateDevice CedarBackup2.writers.cdwriter.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.writers.cdwriter.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.writers.cdwriter.MEDIA_CDRW_74 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDRW_74 CedarBackup2.writers.cdwriter.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.writers.cdwriter.MKISOFS_COMMAND CedarBackup2.writers.cdwriter-module.html#MKISOFS_COMMAND CedarBackup2.writers.cdwriter.MEDIA_CDR_80 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDR_80 CedarBackup2.writers.cdwriter.MEDIA_CDR_74 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDR_74 CedarBackup2.writers.dvdwriter CedarBackup2.writers.dvdwriter-module.html CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSR CedarBackup2.writers.dvdwriter-module.html#MEDIA_DVDPLUSR CedarBackup2.writers.dvdwriter.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.writers.dvdwriter.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.writers.dvdwriter.__package__ 
CedarBackup2.writers.dvdwriter-module.html#__package__ CedarBackup2.writers.dvdwriter.logger CedarBackup2.writers.dvdwriter-module.html#logger CedarBackup2.writers.dvdwriter.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.writers.dvdwriter.EJECT_COMMAND CedarBackup2.writers.dvdwriter-module.html#EJECT_COMMAND CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSRW CedarBackup2.writers.dvdwriter-module.html#MEDIA_DVDPLUSRW CedarBackup2.writers.dvdwriter.validateDevice CedarBackup2.writers.util-module.html#validateDevice CedarBackup2.writers.dvdwriter.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.writers.dvdwriter.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.writers.dvdwriter.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.writers.dvdwriter.GROWISOFS_COMMAND CedarBackup2.writers.dvdwriter-module.html#GROWISOFS_COMMAND CedarBackup2.writers.util CedarBackup2.writers.util-module.html CedarBackup2.writers.util.validateDevice CedarBackup2.writers.util-module.html#validateDevice CedarBackup2.writers.util.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.writers.util.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.writers.util.VOLNAME_COMMAND CedarBackup2.writers.util-module.html#VOLNAME_COMMAND CedarBackup2.writers.util.validateScsiId CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.writers.util.__package__ CedarBackup2.writers.util-module.html#__package__ CedarBackup2.writers.util.readMediaLabel CedarBackup2.writers.util-module.html#readMediaLabel CedarBackup2.writers.util.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.writers.util.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.writers.util.logger CedarBackup2.writers.util-module.html#logger CedarBackup2.writers.util.MKISOFS_COMMAND CedarBackup2.writers.util-module.html#MKISOFS_COMMAND CedarBackup2.writers.util.validateDriveSpeed 
CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.xmlutil CedarBackup2.xmlutil-module.html CedarBackup2.xmlutil.readFloat CedarBackup2.xmlutil-module.html#readFloat CedarBackup2.xmlutil.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.xmlutil._translateCDATAAttr CedarBackup2.xmlutil-module.html#_translateCDATAAttr CedarBackup2.xmlutil.TRUE_BOOLEAN_VALUES CedarBackup2.xmlutil-module.html#TRUE_BOOLEAN_VALUES CedarBackup2.xmlutil.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.xmlutil.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.xmlutil.serializeDom CedarBackup2.xmlutil-module.html#serializeDom CedarBackup2.xmlutil.readInteger CedarBackup2.xmlutil-module.html#readInteger CedarBackup2.xmlutil.VALID_BOOLEAN_VALUES CedarBackup2.xmlutil-module.html#VALID_BOOLEAN_VALUES CedarBackup2.xmlutil.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.xmlutil.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.xmlutil.__package__ CedarBackup2.xmlutil-module.html#__package__ CedarBackup2.xmlutil.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.xmlutil.isElement CedarBackup2.xmlutil-module.html#isElement CedarBackup2.xmlutil.logger CedarBackup2.xmlutil-module.html#logger CedarBackup2.xmlutil._encodeText CedarBackup2.xmlutil-module.html#_encodeText CedarBackup2.xmlutil.readChildren CedarBackup2.xmlutil-module.html#readChildren CedarBackup2.xmlutil.FALSE_BOOLEAN_VALUES CedarBackup2.xmlutil-module.html#FALSE_BOOLEAN_VALUES CedarBackup2.xmlutil.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.xmlutil.createOutputDom CedarBackup2.xmlutil-module.html#createOutputDom CedarBackup2.xmlutil.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.xmlutil.addIntegerNode CedarBackup2.xmlutil-module.html#addIntegerNode CedarBackup2.xmlutil._translateCDATA 
CedarBackup2.xmlutil-module.html#_translateCDATA CedarBackup2.cli.Options CedarBackup2.cli.Options-class.html CedarBackup2.cli.Options._getMode CedarBackup2.cli.Options-class.html#_getMode CedarBackup2.cli.Options.stacktrace CedarBackup2.cli.Options-class.html#stacktrace CedarBackup2.cli.Options.managed CedarBackup2.cli.Options-class.html#managed CedarBackup2.cli.Options.help CedarBackup2.cli.Options-class.html#help CedarBackup2.cli.Options._getFull CedarBackup2.cli.Options-class.html#_getFull CedarBackup2.cli.Options.__str__ CedarBackup2.cli.Options-class.html#__str__ CedarBackup2.cli.Options._setStacktrace CedarBackup2.cli.Options-class.html#_setStacktrace CedarBackup2.cli.Options.actions CedarBackup2.cli.Options-class.html#actions CedarBackup2.cli.Options.owner CedarBackup2.cli.Options-class.html#owner CedarBackup2.cli.Options._setQuiet CedarBackup2.cli.Options-class.html#_setQuiet CedarBackup2.cli.Options._setVersion CedarBackup2.cli.Options-class.html#_setVersion CedarBackup2.cli.Options._getVerbose CedarBackup2.cli.Options-class.html#_getVerbose CedarBackup2.cli.Options.verbose CedarBackup2.cli.Options-class.html#verbose CedarBackup2.cli.Options._setHelp CedarBackup2.cli.Options-class.html#_setHelp CedarBackup2.cli.Options._getDebug CedarBackup2.cli.Options-class.html#_getDebug CedarBackup2.cli.Options.debug CedarBackup2.cli.Options-class.html#debug CedarBackup2.cli.Options._parseArgumentList CedarBackup2.cli.Options-class.html#_parseArgumentList CedarBackup2.cli.Options.buildArgumentList CedarBackup2.cli.Options-class.html#buildArgumentList CedarBackup2.cli.Options._getManagedOnly CedarBackup2.cli.Options-class.html#_getManagedOnly CedarBackup2.cli.Options.__cmp__ CedarBackup2.cli.Options-class.html#__cmp__ CedarBackup2.cli.Options._getStacktrace CedarBackup2.cli.Options-class.html#_getStacktrace CedarBackup2.cli.Options._setOwner CedarBackup2.cli.Options-class.html#_setOwner CedarBackup2.cli.Options._setMode CedarBackup2.cli.Options-class.html#_setMode 
CedarBackup2.cli.Options.__init__ CedarBackup2.cli.Options-class.html#__init__ CedarBackup2.cli.Options._getQuiet CedarBackup2.cli.Options-class.html#_getQuiet CedarBackup2.cli.Options.managedOnly CedarBackup2.cli.Options-class.html#managedOnly CedarBackup2.cli.Options._setDebug CedarBackup2.cli.Options-class.html#_setDebug CedarBackup2.cli.Options.config CedarBackup2.cli.Options-class.html#config CedarBackup2.cli.Options.mode CedarBackup2.cli.Options-class.html#mode CedarBackup2.cli.Options._getVersion CedarBackup2.cli.Options-class.html#_getVersion CedarBackup2.cli.Options._getLogfile CedarBackup2.cli.Options-class.html#_getLogfile CedarBackup2.cli.Options.full CedarBackup2.cli.Options-class.html#full CedarBackup2.cli.Options._getConfig CedarBackup2.cli.Options-class.html#_getConfig CedarBackup2.cli.Options._setOutput CedarBackup2.cli.Options-class.html#_setOutput CedarBackup2.cli.Options._setFull CedarBackup2.cli.Options-class.html#_setFull CedarBackup2.cli.Options.version CedarBackup2.cli.Options-class.html#version CedarBackup2.cli.Options._setManagedOnly CedarBackup2.cli.Options-class.html#_setManagedOnly CedarBackup2.cli.Options._setDiagnostics CedarBackup2.cli.Options-class.html#_setDiagnostics CedarBackup2.cli.Options.output CedarBackup2.cli.Options-class.html#output CedarBackup2.cli.Options.validate CedarBackup2.cli.Options-class.html#validate CedarBackup2.cli.Options.logfile CedarBackup2.cli.Options-class.html#logfile CedarBackup2.cli.Options.buildArgumentString CedarBackup2.cli.Options-class.html#buildArgumentString CedarBackup2.cli.Options._getManaged CedarBackup2.cli.Options-class.html#_getManaged CedarBackup2.cli.Options._setManaged CedarBackup2.cli.Options-class.html#_setManaged CedarBackup2.cli.Options._setActions CedarBackup2.cli.Options-class.html#_setActions CedarBackup2.cli.Options._getOutput CedarBackup2.cli.Options-class.html#_getOutput CedarBackup2.cli.Options._getOwner CedarBackup2.cli.Options-class.html#_getOwner 
CedarBackup2.cli.Options._setLogfile CedarBackup2.cli.Options-class.html#_setLogfile CedarBackup2.cli.Options.quiet CedarBackup2.cli.Options-class.html#quiet CedarBackup2.cli.Options.__repr__ CedarBackup2.cli.Options-class.html#__repr__ CedarBackup2.cli.Options.diagnostics CedarBackup2.cli.Options-class.html#diagnostics CedarBackup2.cli.Options._getDiagnostics CedarBackup2.cli.Options-class.html#_getDiagnostics CedarBackup2.cli.Options._setConfig CedarBackup2.cli.Options-class.html#_setConfig CedarBackup2.cli.Options._setVerbose CedarBackup2.cli.Options-class.html#_setVerbose CedarBackup2.cli.Options._getHelp CedarBackup2.cli.Options-class.html#_getHelp CedarBackup2.cli.Options._getActions CedarBackup2.cli.Options-class.html#_getActions CedarBackup2.cli._ActionItem CedarBackup2.cli._ActionItem-class.html CedarBackup2.cli._ActionItem.executeAction CedarBackup2.cli._ActionItem-class.html#executeAction CedarBackup2.cli._ActionItem.__cmp__ CedarBackup2.cli._ActionItem-class.html#__cmp__ CedarBackup2.cli._ActionItem._executeAction CedarBackup2.cli._ActionItem-class.html#_executeAction CedarBackup2.cli._ActionItem.SORT_ORDER CedarBackup2.cli._ActionItem-class.html#SORT_ORDER CedarBackup2.cli._ActionItem._executeHook CedarBackup2.cli._ActionItem-class.html#_executeHook CedarBackup2.cli._ActionItem.__init__ CedarBackup2.cli._ActionItem-class.html#__init__ CedarBackup2.cli._ActionSet CedarBackup2.cli._ActionSet-class.html CedarBackup2.cli._ActionSet._validateActions CedarBackup2.cli._ActionSet-class.html#_validateActions CedarBackup2.cli._ActionSet._deriveHooks CedarBackup2.cli._ActionSet-class.html#_deriveHooks CedarBackup2.cli._ActionSet.__init__ CedarBackup2.cli._ActionSet-class.html#__init__ CedarBackup2.cli._ActionSet._getCbackCommand CedarBackup2.cli._ActionSet-class.html#_getCbackCommand CedarBackup2.cli._ActionSet.executeActions CedarBackup2.cli._ActionSet-class.html#executeActions CedarBackup2.cli._ActionSet._buildIndexMap 
CedarBackup2.cli._ActionSet-class.html#_buildIndexMap CedarBackup2.cli._ActionSet._buildHookMaps CedarBackup2.cli._ActionSet-class.html#_buildHookMaps CedarBackup2.cli._ActionSet._buildActionMap CedarBackup2.cli._ActionSet-class.html#_buildActionMap CedarBackup2.cli._ActionSet._buildFunctionMap CedarBackup2.cli._ActionSet-class.html#_buildFunctionMap CedarBackup2.cli._ActionSet._buildPeerMap CedarBackup2.cli._ActionSet-class.html#_buildPeerMap CedarBackup2.cli._ActionSet._getManagedActions CedarBackup2.cli._ActionSet-class.html#_getManagedActions CedarBackup2.cli._ActionSet._getRemoteUser CedarBackup2.cli._ActionSet-class.html#_getRemoteUser CedarBackup2.cli._ActionSet._deriveExtensionNames CedarBackup2.cli._ActionSet-class.html#_deriveExtensionNames CedarBackup2.cli._ActionSet._getRshCommand CedarBackup2.cli._ActionSet-class.html#_getRshCommand CedarBackup2.cli._ActionSet._buildActionSet CedarBackup2.cli._ActionSet-class.html#_buildActionSet CedarBackup2.cli._ManagedActionItem CedarBackup2.cli._ManagedActionItem-class.html CedarBackup2.cli._ManagedActionItem.executeAction CedarBackup2.cli._ManagedActionItem-class.html#executeAction CedarBackup2.cli._ManagedActionItem.__cmp__ CedarBackup2.cli._ManagedActionItem-class.html#__cmp__ CedarBackup2.cli._ManagedActionItem.SORT_ORDER CedarBackup2.cli._ManagedActionItem-class.html#SORT_ORDER CedarBackup2.cli._ManagedActionItem.__init__ CedarBackup2.cli._ManagedActionItem-class.html#__init__ CedarBackup2.config.ActionDependencies CedarBackup2.config.ActionDependencies-class.html CedarBackup2.config.ActionDependencies._setAfterList CedarBackup2.config.ActionDependencies-class.html#_setAfterList CedarBackup2.config.ActionDependencies._getAfterList CedarBackup2.config.ActionDependencies-class.html#_getAfterList CedarBackup2.config.ActionDependencies.__str__ CedarBackup2.config.ActionDependencies-class.html#__str__ CedarBackup2.config.ActionDependencies.beforeList CedarBackup2.config.ActionDependencies-class.html#beforeList 
CedarBackup2.config.ActionDependencies.__cmp__ CedarBackup2.config.ActionDependencies-class.html#__cmp__ CedarBackup2.config.ActionDependencies.__repr__ CedarBackup2.config.ActionDependencies-class.html#__repr__ CedarBackup2.config.ActionDependencies._getBeforeList CedarBackup2.config.ActionDependencies-class.html#_getBeforeList CedarBackup2.config.ActionDependencies._setBeforeList CedarBackup2.config.ActionDependencies-class.html#_setBeforeList CedarBackup2.config.ActionDependencies.afterList CedarBackup2.config.ActionDependencies-class.html#afterList CedarBackup2.config.ActionDependencies.__init__ CedarBackup2.config.ActionDependencies-class.html#__init__ CedarBackup2.config.ActionHook CedarBackup2.config.ActionHook-class.html CedarBackup2.config.ActionHook.__str__ CedarBackup2.config.ActionHook-class.html#__str__ CedarBackup2.config.ActionHook._getAction CedarBackup2.config.ActionHook-class.html#_getAction CedarBackup2.config.ActionHook.__init__ CedarBackup2.config.ActionHook-class.html#__init__ CedarBackup2.config.ActionHook._getCommand CedarBackup2.config.ActionHook-class.html#_getCommand CedarBackup2.config.ActionHook._getBefore CedarBackup2.config.ActionHook-class.html#_getBefore CedarBackup2.config.ActionHook._setAction CedarBackup2.config.ActionHook-class.html#_setAction CedarBackup2.config.ActionHook.__cmp__ CedarBackup2.config.ActionHook-class.html#__cmp__ CedarBackup2.config.ActionHook._getAfter CedarBackup2.config.ActionHook-class.html#_getAfter CedarBackup2.config.ActionHook.before CedarBackup2.config.ActionHook-class.html#before CedarBackup2.config.ActionHook.after CedarBackup2.config.ActionHook-class.html#after CedarBackup2.config.ActionHook._setCommand CedarBackup2.config.ActionHook-class.html#_setCommand CedarBackup2.config.ActionHook.command CedarBackup2.config.ActionHook-class.html#command CedarBackup2.config.ActionHook.__repr__ CedarBackup2.config.ActionHook-class.html#__repr__ CedarBackup2.config.ActionHook.action 
CedarBackup2.config.ActionHook-class.html#action CedarBackup2.config.BlankBehavior CedarBackup2.config.BlankBehavior-class.html CedarBackup2.config.BlankBehavior._setBlankFactor CedarBackup2.config.BlankBehavior-class.html#_setBlankFactor CedarBackup2.config.BlankBehavior.__str__ CedarBackup2.config.BlankBehavior-class.html#__str__ CedarBackup2.config.BlankBehavior._getBlankFactor CedarBackup2.config.BlankBehavior-class.html#_getBlankFactor CedarBackup2.config.BlankBehavior._setBlankMode CedarBackup2.config.BlankBehavior-class.html#_setBlankMode CedarBackup2.config.BlankBehavior.__cmp__ CedarBackup2.config.BlankBehavior-class.html#__cmp__ CedarBackup2.config.BlankBehavior.blankFactor CedarBackup2.config.BlankBehavior-class.html#blankFactor CedarBackup2.config.BlankBehavior.__repr__ CedarBackup2.config.BlankBehavior-class.html#__repr__ CedarBackup2.config.BlankBehavior.blankMode CedarBackup2.config.BlankBehavior-class.html#blankMode CedarBackup2.config.BlankBehavior._getBlankMode CedarBackup2.config.BlankBehavior-class.html#_getBlankMode CedarBackup2.config.BlankBehavior.__init__ CedarBackup2.config.BlankBehavior-class.html#__init__ CedarBackup2.config.ByteQuantity CedarBackup2.config.ByteQuantity-class.html CedarBackup2.config.ByteQuantity._setQuantity CedarBackup2.config.ByteQuantity-class.html#_setQuantity CedarBackup2.config.ByteQuantity._getBytes CedarBackup2.config.ByteQuantity-class.html#_getBytes CedarBackup2.config.ByteQuantity.__str__ CedarBackup2.config.ByteQuantity-class.html#__str__ CedarBackup2.config.ByteQuantity.__init__ CedarBackup2.config.ByteQuantity-class.html#__init__ CedarBackup2.config.ByteQuantity.__cmp__ CedarBackup2.config.ByteQuantity-class.html#__cmp__ CedarBackup2.config.ByteQuantity._getQuantity CedarBackup2.config.ByteQuantity-class.html#_getQuantity CedarBackup2.config.ByteQuantity.units CedarBackup2.config.ByteQuantity-class.html#units CedarBackup2.config.ByteQuantity._getUnits CedarBackup2.config.ByteQuantity-class.html#_getUnits 
CedarBackup2.config.ByteQuantity._setUnits CedarBackup2.config.ByteQuantity-class.html#_setUnits CedarBackup2.config.ByteQuantity.bytes CedarBackup2.config.ByteQuantity-class.html#bytes CedarBackup2.config.ByteQuantity.__repr__ CedarBackup2.config.ByteQuantity-class.html#__repr__ CedarBackup2.config.ByteQuantity.quantity CedarBackup2.config.ByteQuantity-class.html#quantity CedarBackup2.config.CollectConfig CedarBackup2.config.CollectConfig-class.html CedarBackup2.config.CollectConfig._getCollectMode CedarBackup2.config.CollectConfig-class.html#_getCollectMode CedarBackup2.config.CollectConfig._getArchiveMode CedarBackup2.config.CollectConfig-class.html#_getArchiveMode CedarBackup2.config.CollectConfig.__str__ CedarBackup2.config.CollectConfig-class.html#__str__ CedarBackup2.config.CollectConfig._setArchiveMode CedarBackup2.config.CollectConfig-class.html#_setArchiveMode CedarBackup2.config.CollectConfig._setExcludePatterns CedarBackup2.config.CollectConfig-class.html#_setExcludePatterns CedarBackup2.config.CollectConfig.collectDirs CedarBackup2.config.CollectConfig-class.html#collectDirs CedarBackup2.config.CollectConfig._getCollectFiles CedarBackup2.config.CollectConfig-class.html#_getCollectFiles CedarBackup2.config.CollectConfig.collectFiles CedarBackup2.config.CollectConfig-class.html#collectFiles CedarBackup2.config.CollectConfig.__init__ CedarBackup2.config.CollectConfig-class.html#__init__ CedarBackup2.config.CollectConfig._setCollectMode CedarBackup2.config.CollectConfig-class.html#_setCollectMode CedarBackup2.config.CollectConfig.archiveMode CedarBackup2.config.CollectConfig-class.html#archiveMode CedarBackup2.config.CollectConfig._getTargetDir CedarBackup2.config.CollectConfig-class.html#_getTargetDir CedarBackup2.config.CollectConfig.__cmp__ CedarBackup2.config.CollectConfig-class.html#__cmp__ CedarBackup2.config.CollectConfig._setIgnoreFile CedarBackup2.config.CollectConfig-class.html#_setIgnoreFile CedarBackup2.config.CollectConfig.absoluteExcludePaths 
CedarBackup2.config.CollectConfig-class.html#absoluteExcludePaths CedarBackup2.config.CollectConfig._getCollectDirs CedarBackup2.config.CollectConfig-class.html#_getCollectDirs CedarBackup2.config.CollectConfig.ignoreFile CedarBackup2.config.CollectConfig-class.html#ignoreFile CedarBackup2.config.CollectConfig._setCollectFiles CedarBackup2.config.CollectConfig-class.html#_setCollectFiles CedarBackup2.config.CollectConfig._setAbsoluteExcludePaths CedarBackup2.config.CollectConfig-class.html#_setAbsoluteExcludePaths CedarBackup2.config.CollectConfig._setCollectDirs CedarBackup2.config.CollectConfig-class.html#_setCollectDirs CedarBackup2.config.CollectConfig._getIgnoreFile CedarBackup2.config.CollectConfig-class.html#_getIgnoreFile CedarBackup2.config.CollectConfig._getAbsoluteExcludePaths CedarBackup2.config.CollectConfig-class.html#_getAbsoluteExcludePaths CedarBackup2.config.CollectConfig.collectMode CedarBackup2.config.CollectConfig-class.html#collectMode CedarBackup2.config.CollectConfig._getExcludePatterns CedarBackup2.config.CollectConfig-class.html#_getExcludePatterns CedarBackup2.config.CollectConfig.excludePatterns CedarBackup2.config.CollectConfig-class.html#excludePatterns CedarBackup2.config.CollectConfig.targetDir CedarBackup2.config.CollectConfig-class.html#targetDir CedarBackup2.config.CollectConfig.__repr__ CedarBackup2.config.CollectConfig-class.html#__repr__ CedarBackup2.config.CollectConfig._setTargetDir CedarBackup2.config.CollectConfig-class.html#_setTargetDir CedarBackup2.config.CollectDir CedarBackup2.config.CollectDir-class.html CedarBackup2.config.CollectDir._getCollectMode CedarBackup2.config.CollectDir-class.html#_getCollectMode CedarBackup2.config.CollectDir._getArchiveMode CedarBackup2.config.CollectDir-class.html#_getArchiveMode CedarBackup2.config.CollectDir.archiveMode CedarBackup2.config.CollectDir-class.html#archiveMode CedarBackup2.config.CollectDir.__str__ CedarBackup2.config.CollectDir-class.html#__str__ 
CedarBackup2.config.CollectDir._getAbsolutePath CedarBackup2.config.CollectDir-class.html#_getAbsolutePath CedarBackup2.config.CollectDir._setExcludePatterns CedarBackup2.config.CollectDir-class.html#_setExcludePatterns CedarBackup2.config.CollectDir.__init__ CedarBackup2.config.CollectDir-class.html#__init__ CedarBackup2.config.CollectDir._setCollectMode CedarBackup2.config.CollectDir-class.html#_setCollectMode CedarBackup2.config.CollectDir._setLinkDepth CedarBackup2.config.CollectDir-class.html#_setLinkDepth CedarBackup2.config.CollectDir.recursionLevel CedarBackup2.config.CollectDir-class.html#recursionLevel CedarBackup2.config.CollectDir.absolutePath CedarBackup2.config.CollectDir-class.html#absolutePath CedarBackup2.config.CollectDir.__cmp__ CedarBackup2.config.CollectDir-class.html#__cmp__ CedarBackup2.config.CollectDir._setIgnoreFile CedarBackup2.config.CollectDir-class.html#_setIgnoreFile CedarBackup2.config.CollectDir.absoluteExcludePaths CedarBackup2.config.CollectDir-class.html#absoluteExcludePaths CedarBackup2.config.CollectDir.relativeExcludePaths CedarBackup2.config.CollectDir-class.html#relativeExcludePaths CedarBackup2.config.CollectDir._setArchiveMode CedarBackup2.config.CollectDir-class.html#_setArchiveMode CedarBackup2.config.CollectDir._getDereference CedarBackup2.config.CollectDir-class.html#_getDereference CedarBackup2.config.CollectDir.ignoreFile CedarBackup2.config.CollectDir-class.html#ignoreFile CedarBackup2.config.CollectDir._getLinkDepth CedarBackup2.config.CollectDir-class.html#_getLinkDepth CedarBackup2.config.CollectDir.dereference CedarBackup2.config.CollectDir-class.html#dereference CedarBackup2.config.CollectDir._setAbsoluteExcludePaths CedarBackup2.config.CollectDir-class.html#_setAbsoluteExcludePaths CedarBackup2.config.CollectDir.linkDepth CedarBackup2.config.CollectDir-class.html#linkDepth CedarBackup2.config.CollectDir._getRelativeExcludePaths CedarBackup2.config.CollectDir-class.html#_getRelativeExcludePaths 
CedarBackup2.config.CollectDir._setRecursionLevel CedarBackup2.config.CollectDir-class.html#_setRecursionLevel CedarBackup2.config.CollectDir._getRecursionLevel CedarBackup2.config.CollectDir-class.html#_getRecursionLevel CedarBackup2.config.CollectDir._setDereference CedarBackup2.config.CollectDir-class.html#_setDereference CedarBackup2.config.CollectDir._getIgnoreFile CedarBackup2.config.CollectDir-class.html#_getIgnoreFile CedarBackup2.config.CollectDir._getAbsoluteExcludePaths CedarBackup2.config.CollectDir-class.html#_getAbsoluteExcludePaths CedarBackup2.config.CollectDir.collectMode CedarBackup2.config.CollectDir-class.html#collectMode CedarBackup2.config.CollectDir._setRelativeExcludePaths CedarBackup2.config.CollectDir-class.html#_setRelativeExcludePaths CedarBackup2.config.CollectDir.excludePatterns CedarBackup2.config.CollectDir-class.html#excludePatterns CedarBackup2.config.CollectDir._setAbsolutePath CedarBackup2.config.CollectDir-class.html#_setAbsolutePath CedarBackup2.config.CollectDir._getExcludePatterns CedarBackup2.config.CollectDir-class.html#_getExcludePatterns CedarBackup2.config.CollectDir.__repr__ CedarBackup2.config.CollectDir-class.html#__repr__ CedarBackup2.config.CollectFile CedarBackup2.config.CollectFile-class.html CedarBackup2.config.CollectFile._getCollectMode CedarBackup2.config.CollectFile-class.html#_getCollectMode CedarBackup2.config.CollectFile._getArchiveMode CedarBackup2.config.CollectFile-class.html#_getArchiveMode CedarBackup2.config.CollectFile.__str__ CedarBackup2.config.CollectFile-class.html#__str__ CedarBackup2.config.CollectFile._setArchiveMode CedarBackup2.config.CollectFile-class.html#_setArchiveMode CedarBackup2.config.CollectFile.__init__ CedarBackup2.config.CollectFile-class.html#__init__ CedarBackup2.config.CollectFile._setCollectMode CedarBackup2.config.CollectFile-class.html#_setCollectMode CedarBackup2.config.CollectFile.archiveMode CedarBackup2.config.CollectFile-class.html#archiveMode 
CedarBackup2.config.CollectFile.absolutePath CedarBackup2.config.CollectFile-class.html#absolutePath CedarBackup2.config.CollectFile.__cmp__ CedarBackup2.config.CollectFile-class.html#__cmp__ CedarBackup2.config.CollectFile._getAbsolutePath CedarBackup2.config.CollectFile-class.html#_getAbsolutePath CedarBackup2.config.CollectFile.collectMode CedarBackup2.config.CollectFile-class.html#collectMode CedarBackup2.config.CollectFile._setAbsolutePath CedarBackup2.config.CollectFile-class.html#_setAbsolutePath CedarBackup2.config.CollectFile.__repr__ CedarBackup2.config.CollectFile-class.html#__repr__ CedarBackup2.config.CommandOverride CedarBackup2.config.CommandOverride-class.html CedarBackup2.config.CommandOverride.__str__ CedarBackup2.config.CommandOverride-class.html#__str__ CedarBackup2.config.CommandOverride._getAbsolutePath CedarBackup2.config.CommandOverride-class.html#_getAbsolutePath CedarBackup2.config.CommandOverride.absolutePath CedarBackup2.config.CommandOverride-class.html#absolutePath CedarBackup2.config.CommandOverride.__cmp__ CedarBackup2.config.CommandOverride-class.html#__cmp__ CedarBackup2.config.CommandOverride._setCommand CedarBackup2.config.CommandOverride-class.html#_setCommand CedarBackup2.config.CommandOverride.command CedarBackup2.config.CommandOverride-class.html#command CedarBackup2.config.CommandOverride.__repr__ CedarBackup2.config.CommandOverride-class.html#__repr__ CedarBackup2.config.CommandOverride._setAbsolutePath CedarBackup2.config.CommandOverride-class.html#_setAbsolutePath CedarBackup2.config.CommandOverride.__init__ CedarBackup2.config.CommandOverride-class.html#__init__ CedarBackup2.config.CommandOverride._getCommand CedarBackup2.config.CommandOverride-class.html#_getCommand CedarBackup2.config.Config CedarBackup2.config.Config-class.html CedarBackup2.config.Config._addCollect CedarBackup2.config.Config-class.html#_addCollect CedarBackup2.config.Config.extractXml CedarBackup2.config.Config-class.html#extractXml 
CedarBackup2.config.Config._addStage CedarBackup2.config.Config-class.html#_addStage CedarBackup2.config.Config._getReference CedarBackup2.config.Config-class.html#_getReference CedarBackup2.config.Config.__str__ CedarBackup2.config.Config-class.html#__str__ CedarBackup2.config.Config._validateStage CedarBackup2.config.Config-class.html#_validateStage CedarBackup2.config.Config._addOptions CedarBackup2.config.Config-class.html#_addOptions CedarBackup2.config.Config._validatePurge CedarBackup2.config.Config-class.html#_validatePurge CedarBackup2.config.Config._parseXmlData CedarBackup2.config.Config-class.html#_parseXmlData CedarBackup2.config.Config._parseOverrides CedarBackup2.config.Config-class.html#_parseOverrides CedarBackup2.config.Config._setStore CedarBackup2.config.Config-class.html#_setStore CedarBackup2.config.Config._addReference CedarBackup2.config.Config-class.html#_addReference CedarBackup2.config.Config.__cmp__ CedarBackup2.config.Config-class.html#__cmp__ CedarBackup2.config.Config._validateStore CedarBackup2.config.Config-class.html#_validateStore CedarBackup2.config.Config._setPurge CedarBackup2.config.Config-class.html#_setPurge CedarBackup2.config.Config._validateExtensions CedarBackup2.config.Config-class.html#_validateExtensions CedarBackup2.config.Config._addExtendedAction CedarBackup2.config.Config-class.html#_addExtendedAction CedarBackup2.config.Config.collect CedarBackup2.config.Config-class.html#collect CedarBackup2.config.Config._validateContents CedarBackup2.config.Config-class.html#_validateContents CedarBackup2.config.Config.reference CedarBackup2.config.Config-class.html#reference CedarBackup2.config.Config._validateReference CedarBackup2.config.Config-class.html#_validateReference CedarBackup2.config.Config._addPeers CedarBackup2.config.Config-class.html#_addPeers CedarBackup2.config.Config._getOptions CedarBackup2.config.Config-class.html#_getOptions CedarBackup2.config.Config._validateOptions 
CedarBackup2.config.Config-class.html#_validateOptions CedarBackup2.config.Config._parseBlankBehavior CedarBackup2.config.Config-class.html#_parseBlankBehavior CedarBackup2.config.Config._getStage CedarBackup2.config.Config-class.html#_getStage CedarBackup2.config.Config._setCollect CedarBackup2.config.Config-class.html#_setCollect CedarBackup2.config.Config._parseReference CedarBackup2.config.Config-class.html#_parseReference CedarBackup2.config.Config._addLocalPeer CedarBackup2.config.Config-class.html#_addLocalPeer CedarBackup2.config.Config._parseExtensions CedarBackup2.config.Config-class.html#_parseExtensions CedarBackup2.config.Config._validatePeers CedarBackup2.config.Config-class.html#_validatePeers CedarBackup2.config.Config.stage CedarBackup2.config.Config-class.html#stage CedarBackup2.config.Config._getExtensions CedarBackup2.config.Config-class.html#_getExtensions CedarBackup2.config.Config._parseExclusions CedarBackup2.config.Config-class.html#_parseExclusions CedarBackup2.config.Config._parseStage CedarBackup2.config.Config-class.html#_parseStage CedarBackup2.config.Config._parseCollectDirs CedarBackup2.config.Config-class.html#_parseCollectDirs CedarBackup2.config.Config.extensions CedarBackup2.config.Config-class.html#extensions CedarBackup2.config.Config._addBlankBehavior CedarBackup2.config.Config-class.html#_addBlankBehavior CedarBackup2.config.Config._parseDependencies CedarBackup2.config.Config-class.html#_parseDependencies CedarBackup2.config.Config.options CedarBackup2.config.Config-class.html#options CedarBackup2.config.Config.__repr__ CedarBackup2.config.Config-class.html#__repr__ CedarBackup2.config.Config._parsePeers CedarBackup2.config.Config-class.html#_parsePeers CedarBackup2.config.Config._addCollectFile CedarBackup2.config.Config-class.html#_addCollectFile CedarBackup2.config.Config._parsePeerList CedarBackup2.config.Config-class.html#_parsePeerList CedarBackup2.config.Config._extractXml 
CedarBackup2.config.Config-class.html#_extractXml CedarBackup2.config.Config._validatePeerList CedarBackup2.config.Config-class.html#_validatePeerList CedarBackup2.config.Config._buildCommaSeparatedString CedarBackup2.config.Config-class.html#_buildCommaSeparatedString CedarBackup2.config.Config._addHook CedarBackup2.config.Config-class.html#_addHook CedarBackup2.config.Config._getCollect CedarBackup2.config.Config-class.html#_getCollect CedarBackup2.config.Config._parseHooks CedarBackup2.config.Config-class.html#_parseHooks CedarBackup2.config.Config._parseStore CedarBackup2.config.Config-class.html#_parseStore CedarBackup2.config.Config._setPeers CedarBackup2.config.Config-class.html#_setPeers CedarBackup2.config.Config._parseOptions CedarBackup2.config.Config-class.html#_parseOptions CedarBackup2.config.Config._getPeers CedarBackup2.config.Config-class.html#_getPeers CedarBackup2.config.Config._addStore CedarBackup2.config.Config-class.html#_addStore CedarBackup2.config.Config._addExtensions CedarBackup2.config.Config-class.html#_addExtensions CedarBackup2.config.Config.purge CedarBackup2.config.Config-class.html#purge CedarBackup2.config.Config.store CedarBackup2.config.Config-class.html#store CedarBackup2.config.Config._addOverride CedarBackup2.config.Config-class.html#_addOverride CedarBackup2.config.Config._addPurgeDir CedarBackup2.config.Config-class.html#_addPurgeDir CedarBackup2.config.Config._addDependencies CedarBackup2.config.Config-class.html#_addDependencies CedarBackup2.config.Config._addCollectDir CedarBackup2.config.Config-class.html#_addCollectDir CedarBackup2.config.Config._parsePurge CedarBackup2.config.Config-class.html#_parsePurge CedarBackup2.config.Config._addRemotePeer CedarBackup2.config.Config-class.html#_addRemotePeer CedarBackup2.config.Config.__init__ CedarBackup2.config.Config-class.html#__init__ CedarBackup2.config.Config._addPurge CedarBackup2.config.Config-class.html#_addPurge CedarBackup2.config.Config._setExtensions 
CedarBackup2.config.Config-class.html#_setExtensions CedarBackup2.config.Config._parsePurgeDirs CedarBackup2.config.Config-class.html#_parsePurgeDirs CedarBackup2.config.Config._parseCollect CedarBackup2.config.Config-class.html#_parseCollect CedarBackup2.config.Config._getStore CedarBackup2.config.Config-class.html#_getStore CedarBackup2.config.Config._setStage CedarBackup2.config.Config-class.html#_setStage CedarBackup2.config.Config._validateCollect CedarBackup2.config.Config-class.html#_validateCollect CedarBackup2.config.Config._getPurge CedarBackup2.config.Config-class.html#_getPurge CedarBackup2.config.Config.validate CedarBackup2.config.Config-class.html#validate CedarBackup2.config.Config._parseExtendedActions CedarBackup2.config.Config-class.html#_parseExtendedActions CedarBackup2.config.Config.peers CedarBackup2.config.Config-class.html#peers CedarBackup2.config.Config._parseCollectFiles CedarBackup2.config.Config-class.html#_parseCollectFiles CedarBackup2.config.Config._setOptions CedarBackup2.config.Config-class.html#_setOptions CedarBackup2.config.Config._setReference CedarBackup2.config.Config-class.html#_setReference CedarBackup2.config.ExtendedAction CedarBackup2.config.ExtendedAction-class.html CedarBackup2.config.ExtendedAction._getModule CedarBackup2.config.ExtendedAction-class.html#_getModule CedarBackup2.config.ExtendedAction.__str__ CedarBackup2.config.ExtendedAction-class.html#__str__ CedarBackup2.config.ExtendedAction.module CedarBackup2.config.ExtendedAction-class.html#module CedarBackup2.config.ExtendedAction._getName CedarBackup2.config.ExtendedAction-class.html#_getName CedarBackup2.config.ExtendedAction.__init__ CedarBackup2.config.ExtendedAction-class.html#__init__ CedarBackup2.config.ExtendedAction.index CedarBackup2.config.ExtendedAction-class.html#index CedarBackup2.config.ExtendedAction.__cmp__ CedarBackup2.config.ExtendedAction-class.html#__cmp__ CedarBackup2.config.ExtendedAction._getDependencies 
CedarBackup2.config.ExtendedAction-class.html#_getDependencies CedarBackup2.config.ExtendedAction.function CedarBackup2.config.ExtendedAction-class.html#function CedarBackup2.config.ExtendedAction._setIndex CedarBackup2.config.ExtendedAction-class.html#_setIndex CedarBackup2.config.ExtendedAction._getFunction CedarBackup2.config.ExtendedAction-class.html#_getFunction CedarBackup2.config.ExtendedAction._setDependencies CedarBackup2.config.ExtendedAction-class.html#_setDependencies CedarBackup2.config.ExtendedAction.dependencies CedarBackup2.config.ExtendedAction-class.html#dependencies CedarBackup2.config.ExtendedAction._setModule CedarBackup2.config.ExtendedAction-class.html#_setModule CedarBackup2.config.ExtendedAction._getIndex CedarBackup2.config.ExtendedAction-class.html#_getIndex CedarBackup2.config.ExtendedAction._setFunction CedarBackup2.config.ExtendedAction-class.html#_setFunction CedarBackup2.config.ExtendedAction.name CedarBackup2.config.ExtendedAction-class.html#name CedarBackup2.config.ExtendedAction.__repr__ CedarBackup2.config.ExtendedAction-class.html#__repr__ CedarBackup2.config.ExtendedAction._setName CedarBackup2.config.ExtendedAction-class.html#_setName CedarBackup2.config.ExtensionsConfig CedarBackup2.config.ExtensionsConfig-class.html CedarBackup2.config.ExtensionsConfig.orderMode CedarBackup2.config.ExtensionsConfig-class.html#orderMode CedarBackup2.config.ExtensionsConfig.__str__ CedarBackup2.config.ExtensionsConfig-class.html#__str__ CedarBackup2.config.ExtensionsConfig.actions CedarBackup2.config.ExtensionsConfig-class.html#actions CedarBackup2.config.ExtensionsConfig.__cmp__ CedarBackup2.config.ExtensionsConfig-class.html#__cmp__ CedarBackup2.config.ExtensionsConfig._setActions CedarBackup2.config.ExtensionsConfig-class.html#_setActions CedarBackup2.config.ExtensionsConfig._setOrderMode CedarBackup2.config.ExtensionsConfig-class.html#_setOrderMode CedarBackup2.config.ExtensionsConfig.__repr__ 
CedarBackup2.config.ExtensionsConfig-class.html#__repr__ CedarBackup2.config.ExtensionsConfig._getOrderMode CedarBackup2.config.ExtensionsConfig-class.html#_getOrderMode CedarBackup2.config.ExtensionsConfig._getActions CedarBackup2.config.ExtensionsConfig-class.html#_getActions CedarBackup2.config.ExtensionsConfig.__init__ CedarBackup2.config.ExtensionsConfig-class.html#__init__ CedarBackup2.config.LocalPeer CedarBackup2.config.LocalPeer-class.html CedarBackup2.config.LocalPeer.__str__ CedarBackup2.config.LocalPeer-class.html#__str__ CedarBackup2.config.LocalPeer._setIgnoreFailureMode CedarBackup2.config.LocalPeer-class.html#_setIgnoreFailureMode CedarBackup2.config.LocalPeer._getName CedarBackup2.config.LocalPeer-class.html#_getName CedarBackup2.config.LocalPeer.__init__ CedarBackup2.config.LocalPeer-class.html#__init__ CedarBackup2.config.LocalPeer.__cmp__ CedarBackup2.config.LocalPeer-class.html#__cmp__ CedarBackup2.config.LocalPeer._getIgnoreFailureMode CedarBackup2.config.LocalPeer-class.html#_getIgnoreFailureMode CedarBackup2.config.LocalPeer.ignoreFailureMode CedarBackup2.config.LocalPeer-class.html#ignoreFailureMode CedarBackup2.config.LocalPeer._getCollectDir CedarBackup2.config.LocalPeer-class.html#_getCollectDir CedarBackup2.config.LocalPeer.name CedarBackup2.config.LocalPeer-class.html#name CedarBackup2.config.LocalPeer.collectDir CedarBackup2.config.LocalPeer-class.html#collectDir CedarBackup2.config.LocalPeer._setCollectDir CedarBackup2.config.LocalPeer-class.html#_setCollectDir CedarBackup2.config.LocalPeer.__repr__ CedarBackup2.config.LocalPeer-class.html#__repr__ CedarBackup2.config.LocalPeer._setName CedarBackup2.config.LocalPeer-class.html#_setName CedarBackup2.config.OptionsConfig CedarBackup2.config.OptionsConfig-class.html CedarBackup2.config.OptionsConfig._getRcpCommand CedarBackup2.config.OptionsConfig-class.html#_getRcpCommand CedarBackup2.config.OptionsConfig._getWorkingDir CedarBackup2.config.OptionsConfig-class.html#_getWorkingDir 
CedarBackup2.config.OptionsConfig._setBackupUser CedarBackup2.config.OptionsConfig-class.html#_setBackupUser CedarBackup2.config.OptionsConfig.__str__ CedarBackup2.config.OptionsConfig-class.html#__str__ CedarBackup2.config.OptionsConfig.backupUser CedarBackup2.config.OptionsConfig-class.html#backupUser CedarBackup2.config.OptionsConfig._getStartingDay CedarBackup2.config.OptionsConfig-class.html#_getStartingDay CedarBackup2.config.OptionsConfig.managedActions CedarBackup2.config.OptionsConfig-class.html#managedActions CedarBackup2.config.OptionsConfig.replaceOverride CedarBackup2.config.OptionsConfig-class.html#replaceOverride CedarBackup2.config.OptionsConfig._getBackupUser CedarBackup2.config.OptionsConfig-class.html#_getBackupUser CedarBackup2.config.OptionsConfig.__init__ CedarBackup2.config.OptionsConfig-class.html#__init__ CedarBackup2.config.OptionsConfig._setBackupGroup CedarBackup2.config.OptionsConfig-class.html#_setBackupGroup CedarBackup2.config.OptionsConfig._setCbackCommand CedarBackup2.config.OptionsConfig-class.html#_setCbackCommand CedarBackup2.config.OptionsConfig._getCbackCommand CedarBackup2.config.OptionsConfig-class.html#_getCbackCommand CedarBackup2.config.OptionsConfig.workingDir CedarBackup2.config.OptionsConfig-class.html#workingDir CedarBackup2.config.OptionsConfig.__cmp__ CedarBackup2.config.OptionsConfig-class.html#__cmp__ CedarBackup2.config.OptionsConfig.hooks CedarBackup2.config.OptionsConfig-class.html#hooks CedarBackup2.config.OptionsConfig.backupGroup CedarBackup2.config.OptionsConfig-class.html#backupGroup CedarBackup2.config.OptionsConfig.startingDay CedarBackup2.config.OptionsConfig-class.html#startingDay CedarBackup2.config.OptionsConfig._getHooks CedarBackup2.config.OptionsConfig-class.html#_getHooks CedarBackup2.config.OptionsConfig._setWorkingDir CedarBackup2.config.OptionsConfig-class.html#_setWorkingDir CedarBackup2.config.OptionsConfig._getBackupGroup CedarBackup2.config.OptionsConfig-class.html#_getBackupGroup 
CedarBackup2.config.OptionsConfig.rshCommand CedarBackup2.config.OptionsConfig-class.html#rshCommand CedarBackup2.config.OptionsConfig.addOverride CedarBackup2.config.OptionsConfig-class.html#addOverride CedarBackup2.config.OptionsConfig._setManagedActions CedarBackup2.config.OptionsConfig-class.html#_setManagedActions CedarBackup2.config.OptionsConfig.rcpCommand CedarBackup2.config.OptionsConfig-class.html#rcpCommand CedarBackup2.config.OptionsConfig._setRcpCommand CedarBackup2.config.OptionsConfig-class.html#_setRcpCommand CedarBackup2.config.OptionsConfig.cbackCommand CedarBackup2.config.OptionsConfig-class.html#cbackCommand CedarBackup2.config.OptionsConfig.overrides CedarBackup2.config.OptionsConfig-class.html#overrides CedarBackup2.config.OptionsConfig._setOverrides CedarBackup2.config.OptionsConfig-class.html#_setOverrides CedarBackup2.config.OptionsConfig._setHooks CedarBackup2.config.OptionsConfig-class.html#_setHooks CedarBackup2.config.OptionsConfig._getManagedActions CedarBackup2.config.OptionsConfig-class.html#_getManagedActions CedarBackup2.config.OptionsConfig._getOverrides CedarBackup2.config.OptionsConfig-class.html#_getOverrides CedarBackup2.config.OptionsConfig.__repr__ CedarBackup2.config.OptionsConfig-class.html#__repr__ CedarBackup2.config.OptionsConfig._getRshCommand CedarBackup2.config.OptionsConfig-class.html#_getRshCommand CedarBackup2.config.OptionsConfig._setRshCommand CedarBackup2.config.OptionsConfig-class.html#_setRshCommand CedarBackup2.config.OptionsConfig._setStartingDay CedarBackup2.config.OptionsConfig-class.html#_setStartingDay CedarBackup2.config.PeersConfig CedarBackup2.config.PeersConfig-class.html CedarBackup2.config.PeersConfig.__str__ CedarBackup2.config.PeersConfig-class.html#__str__ CedarBackup2.config.PeersConfig._getRemotePeers CedarBackup2.config.PeersConfig-class.html#_getRemotePeers CedarBackup2.config.PeersConfig.localPeers CedarBackup2.config.PeersConfig-class.html#localPeers 
CedarBackup2.config.PeersConfig.__init__ CedarBackup2.config.PeersConfig-class.html#__init__ CedarBackup2.config.PeersConfig.hasPeers CedarBackup2.config.PeersConfig-class.html#hasPeers CedarBackup2.config.PeersConfig._setRemotePeers CedarBackup2.config.PeersConfig-class.html#_setRemotePeers CedarBackup2.config.PeersConfig.__cmp__ CedarBackup2.config.PeersConfig-class.html#__cmp__ CedarBackup2.config.PeersConfig._getLocalPeers CedarBackup2.config.PeersConfig-class.html#_getLocalPeers CedarBackup2.config.PeersConfig._setLocalPeers CedarBackup2.config.PeersConfig-class.html#_setLocalPeers CedarBackup2.config.PeersConfig.remotePeers CedarBackup2.config.PeersConfig-class.html#remotePeers CedarBackup2.config.PeersConfig.__repr__ CedarBackup2.config.PeersConfig-class.html#__repr__ CedarBackup2.config.PostActionHook CedarBackup2.config.PostActionHook-class.html CedarBackup2.config.ActionHook.__str__ CedarBackup2.config.ActionHook-class.html#__str__ CedarBackup2.config.ActionHook._getAction CedarBackup2.config.ActionHook-class.html#_getAction CedarBackup2.config.PostActionHook.__init__ CedarBackup2.config.PostActionHook-class.html#__init__ CedarBackup2.config.ActionHook.before CedarBackup2.config.ActionHook-class.html#before CedarBackup2.config.ActionHook._getBefore CedarBackup2.config.ActionHook-class.html#_getBefore CedarBackup2.config.ActionHook._setAction CedarBackup2.config.ActionHook-class.html#_setAction CedarBackup2.config.ActionHook.__cmp__ CedarBackup2.config.ActionHook-class.html#__cmp__ CedarBackup2.config.ActionHook._getAfter CedarBackup2.config.ActionHook-class.html#_getAfter CedarBackup2.config.ActionHook._getCommand CedarBackup2.config.ActionHook-class.html#_getCommand CedarBackup2.config.ActionHook.after CedarBackup2.config.ActionHook-class.html#after CedarBackup2.config.ActionHook._setCommand CedarBackup2.config.ActionHook-class.html#_setCommand CedarBackup2.config.ActionHook.command CedarBackup2.config.ActionHook-class.html#command 
CedarBackup2.config.PostActionHook.__repr__ CedarBackup2.config.PostActionHook-class.html#__repr__ CedarBackup2.config.ActionHook.action CedarBackup2.config.ActionHook-class.html#action CedarBackup2.config.PreActionHook CedarBackup2.config.PreActionHook-class.html CedarBackup2.config.ActionHook.__str__ CedarBackup2.config.ActionHook-class.html#__str__ CedarBackup2.config.ActionHook._getAction CedarBackup2.config.ActionHook-class.html#_getAction CedarBackup2.config.PreActionHook.__init__ CedarBackup2.config.PreActionHook-class.html#__init__ CedarBackup2.config.ActionHook.before CedarBackup2.config.ActionHook-class.html#before CedarBackup2.config.ActionHook._getBefore CedarBackup2.config.ActionHook-class.html#_getBefore CedarBackup2.config.ActionHook._setAction CedarBackup2.config.ActionHook-class.html#_setAction CedarBackup2.config.ActionHook.__cmp__ CedarBackup2.config.ActionHook-class.html#__cmp__ CedarBackup2.config.ActionHook._getAfter CedarBackup2.config.ActionHook-class.html#_getAfter CedarBackup2.config.ActionHook._getCommand CedarBackup2.config.ActionHook-class.html#_getCommand CedarBackup2.config.ActionHook.after CedarBackup2.config.ActionHook-class.html#after CedarBackup2.config.ActionHook._setCommand CedarBackup2.config.ActionHook-class.html#_setCommand CedarBackup2.config.ActionHook.command CedarBackup2.config.ActionHook-class.html#command CedarBackup2.config.PreActionHook.__repr__ CedarBackup2.config.PreActionHook-class.html#__repr__ CedarBackup2.config.ActionHook.action CedarBackup2.config.ActionHook-class.html#action CedarBackup2.config.PurgeConfig CedarBackup2.config.PurgeConfig-class.html CedarBackup2.config.PurgeConfig.__str__ CedarBackup2.config.PurgeConfig-class.html#__str__ CedarBackup2.config.PurgeConfig.__cmp__ CedarBackup2.config.PurgeConfig-class.html#__cmp__ CedarBackup2.config.PurgeConfig._setPurgeDirs CedarBackup2.config.PurgeConfig-class.html#_setPurgeDirs CedarBackup2.config.PurgeConfig.purgeDirs 
CedarBackup2.config.PurgeConfig-class.html#purgeDirs CedarBackup2.config.PurgeConfig.__repr__ CedarBackup2.config.PurgeConfig-class.html#__repr__ CedarBackup2.config.PurgeConfig.__init__ CedarBackup2.config.PurgeConfig-class.html#__init__ CedarBackup2.config.PurgeConfig._getPurgeDirs CedarBackup2.config.PurgeConfig-class.html#_getPurgeDirs CedarBackup2.config.PurgeDir CedarBackup2.config.PurgeDir-class.html CedarBackup2.config.PurgeDir._getRetainDays CedarBackup2.config.PurgeDir-class.html#_getRetainDays CedarBackup2.config.PurgeDir.__str__ CedarBackup2.config.PurgeDir-class.html#__str__ CedarBackup2.config.PurgeDir._getAbsolutePath CedarBackup2.config.PurgeDir-class.html#_getAbsolutePath CedarBackup2.config.PurgeDir.retainDays CedarBackup2.config.PurgeDir-class.html#retainDays CedarBackup2.config.PurgeDir._setRetainDays CedarBackup2.config.PurgeDir-class.html#_setRetainDays CedarBackup2.config.PurgeDir.absolutePath CedarBackup2.config.PurgeDir-class.html#absolutePath CedarBackup2.config.PurgeDir.__cmp__ CedarBackup2.config.PurgeDir-class.html#__cmp__ CedarBackup2.config.PurgeDir.__repr__ CedarBackup2.config.PurgeDir-class.html#__repr__ CedarBackup2.config.PurgeDir._setAbsolutePath CedarBackup2.config.PurgeDir-class.html#_setAbsolutePath CedarBackup2.config.PurgeDir.__init__ CedarBackup2.config.PurgeDir-class.html#__init__ CedarBackup2.config.ReferenceConfig CedarBackup2.config.ReferenceConfig-class.html CedarBackup2.config.ReferenceConfig._setAuthor CedarBackup2.config.ReferenceConfig-class.html#_setAuthor CedarBackup2.config.ReferenceConfig.__str__ CedarBackup2.config.ReferenceConfig-class.html#__str__ CedarBackup2.config.ReferenceConfig.__init__ CedarBackup2.config.ReferenceConfig-class.html#__init__ CedarBackup2.config.ReferenceConfig.generator CedarBackup2.config.ReferenceConfig-class.html#generator CedarBackup2.config.ReferenceConfig.author CedarBackup2.config.ReferenceConfig-class.html#author CedarBackup2.config.ReferenceConfig._getGenerator 
CedarBackup2.config.ReferenceConfig-class.html#_getGenerator CedarBackup2.config.ReferenceConfig.__cmp__ CedarBackup2.config.ReferenceConfig-class.html#__cmp__ CedarBackup2.config.ReferenceConfig.revision CedarBackup2.config.ReferenceConfig-class.html#revision CedarBackup2.config.ReferenceConfig.description CedarBackup2.config.ReferenceConfig-class.html#description CedarBackup2.config.ReferenceConfig._setGenerator CedarBackup2.config.ReferenceConfig-class.html#_setGenerator CedarBackup2.config.ReferenceConfig._setDescription CedarBackup2.config.ReferenceConfig-class.html#_setDescription CedarBackup2.config.ReferenceConfig._setRevision CedarBackup2.config.ReferenceConfig-class.html#_setRevision CedarBackup2.config.ReferenceConfig._getRevision CedarBackup2.config.ReferenceConfig-class.html#_getRevision CedarBackup2.config.ReferenceConfig._getAuthor CedarBackup2.config.ReferenceConfig-class.html#_getAuthor CedarBackup2.config.ReferenceConfig._getDescription CedarBackup2.config.ReferenceConfig-class.html#_getDescription CedarBackup2.config.ReferenceConfig.__repr__ CedarBackup2.config.ReferenceConfig-class.html#__repr__ CedarBackup2.config.RemotePeer CedarBackup2.config.RemotePeer-class.html CedarBackup2.config.RemotePeer._getRcpCommand CedarBackup2.config.RemotePeer-class.html#_getRcpCommand CedarBackup2.config.RemotePeer.managed CedarBackup2.config.RemotePeer-class.html#managed CedarBackup2.config.RemotePeer.__str__ CedarBackup2.config.RemotePeer-class.html#__str__ CedarBackup2.config.RemotePeer.cbackCommand CedarBackup2.config.RemotePeer-class.html#cbackCommand CedarBackup2.config.RemotePeer._setIgnoreFailureMode CedarBackup2.config.RemotePeer-class.html#_setIgnoreFailureMode CedarBackup2.config.RemotePeer.managedActions CedarBackup2.config.RemotePeer-class.html#managedActions CedarBackup2.config.RemotePeer._getName CedarBackup2.config.RemotePeer-class.html#_getName CedarBackup2.config.RemotePeer.__init__ CedarBackup2.config.RemotePeer-class.html#__init__ 
CedarBackup2.config.RemotePeer._setCbackCommand CedarBackup2.config.RemotePeer-class.html#_setCbackCommand CedarBackup2.config.RemotePeer._getCbackCommand CedarBackup2.config.RemotePeer-class.html#_getCbackCommand CedarBackup2.config.RemotePeer.remoteUser CedarBackup2.config.RemotePeer-class.html#remoteUser CedarBackup2.config.RemotePeer.__cmp__ CedarBackup2.config.RemotePeer-class.html#__cmp__ CedarBackup2.config.RemotePeer._getIgnoreFailureMode CedarBackup2.config.RemotePeer-class.html#_getIgnoreFailureMode CedarBackup2.config.RemotePeer.name CedarBackup2.config.RemotePeer-class.html#name CedarBackup2.config.RemotePeer.ignoreFailureMode CedarBackup2.config.RemotePeer-class.html#ignoreFailureMode CedarBackup2.config.RemotePeer._setManaged CedarBackup2.config.RemotePeer-class.html#_setManaged CedarBackup2.config.RemotePeer._setRemoteUser CedarBackup2.config.RemotePeer-class.html#_setRemoteUser CedarBackup2.config.RemotePeer.rshCommand CedarBackup2.config.RemotePeer-class.html#rshCommand CedarBackup2.config.RemotePeer._getManaged CedarBackup2.config.RemotePeer-class.html#_getManaged CedarBackup2.config.RemotePeer._getCollectDir CedarBackup2.config.RemotePeer-class.html#_getCollectDir CedarBackup2.config.RemotePeer._setManagedActions CedarBackup2.config.RemotePeer-class.html#_setManagedActions CedarBackup2.config.RemotePeer.rcpCommand CedarBackup2.config.RemotePeer-class.html#rcpCommand CedarBackup2.config.RemotePeer._setRcpCommand CedarBackup2.config.RemotePeer-class.html#_setRcpCommand CedarBackup2.config.RemotePeer.collectDir CedarBackup2.config.RemotePeer-class.html#collectDir CedarBackup2.config.RemotePeer._setCollectDir CedarBackup2.config.RemotePeer-class.html#_setCollectDir CedarBackup2.config.RemotePeer._getManagedActions CedarBackup2.config.RemotePeer-class.html#_getManagedActions CedarBackup2.config.RemotePeer._getRemoteUser CedarBackup2.config.RemotePeer-class.html#_getRemoteUser CedarBackup2.config.RemotePeer.__repr__ 
CedarBackup2.config.RemotePeer-class.html#__repr__ CedarBackup2.config.RemotePeer._setName CedarBackup2.config.RemotePeer-class.html#_setName CedarBackup2.config.RemotePeer._getRshCommand CedarBackup2.config.RemotePeer-class.html#_getRshCommand CedarBackup2.config.RemotePeer._setRshCommand CedarBackup2.config.RemotePeer-class.html#_setRshCommand CedarBackup2.config.StageConfig CedarBackup2.config.StageConfig-class.html CedarBackup2.config.StageConfig.__str__ CedarBackup2.config.StageConfig-class.html#__str__ CedarBackup2.config.StageConfig._getRemotePeers CedarBackup2.config.StageConfig-class.html#_getRemotePeers CedarBackup2.config.StageConfig.localPeers CedarBackup2.config.StageConfig-class.html#localPeers CedarBackup2.config.StageConfig.__init__ CedarBackup2.config.StageConfig-class.html#__init__ CedarBackup2.config.StageConfig.hasPeers CedarBackup2.config.StageConfig-class.html#hasPeers CedarBackup2.config.StageConfig._setRemotePeers CedarBackup2.config.StageConfig-class.html#_setRemotePeers CedarBackup2.config.StageConfig._getTargetDir CedarBackup2.config.StageConfig-class.html#_getTargetDir CedarBackup2.config.StageConfig.__cmp__ CedarBackup2.config.StageConfig-class.html#__cmp__ CedarBackup2.config.StageConfig._getLocalPeers CedarBackup2.config.StageConfig-class.html#_getLocalPeers CedarBackup2.config.StageConfig._setLocalPeers CedarBackup2.config.StageConfig-class.html#_setLocalPeers CedarBackup2.config.StageConfig.remotePeers CedarBackup2.config.StageConfig-class.html#remotePeers CedarBackup2.config.StageConfig.targetDir CedarBackup2.config.StageConfig-class.html#targetDir CedarBackup2.config.StageConfig.__repr__ CedarBackup2.config.StageConfig-class.html#__repr__ CedarBackup2.config.StageConfig._setTargetDir CedarBackup2.config.StageConfig-class.html#_setTargetDir CedarBackup2.config.StoreConfig CedarBackup2.config.StoreConfig-class.html CedarBackup2.config.StoreConfig.__str__ CedarBackup2.config.StoreConfig-class.html#__str__ 
CedarBackup2.config.StoreConfig._setEjectDelay CedarBackup2.config.StoreConfig-class.html#_setEjectDelay CedarBackup2.config.StoreConfig._getDevicePath CedarBackup2.config.StoreConfig-class.html#_getDevicePath CedarBackup2.config.StoreConfig._setDeviceScsiId CedarBackup2.config.StoreConfig-class.html#_setDeviceScsiId CedarBackup2.config.StoreConfig._setDevicePath CedarBackup2.config.StoreConfig-class.html#_setDevicePath CedarBackup2.config.StoreConfig._getDeviceScsiId CedarBackup2.config.StoreConfig-class.html#_getDeviceScsiId CedarBackup2.config.StoreConfig._setSourceDir CedarBackup2.config.StoreConfig-class.html#_setSourceDir CedarBackup2.config.StoreConfig.__init__ CedarBackup2.config.StoreConfig-class.html#__init__ CedarBackup2.config.StoreConfig.refreshMediaDelay CedarBackup2.config.StoreConfig-class.html#refreshMediaDelay CedarBackup2.config.StoreConfig.sourceDir CedarBackup2.config.StoreConfig-class.html#sourceDir CedarBackup2.config.StoreConfig._getCheckMedia CedarBackup2.config.StoreConfig-class.html#_getCheckMedia CedarBackup2.config.StoreConfig.mediaType CedarBackup2.config.StoreConfig-class.html#mediaType CedarBackup2.config.StoreConfig.__cmp__ CedarBackup2.config.StoreConfig-class.html#__cmp__ CedarBackup2.config.StoreConfig._setNoEject CedarBackup2.config.StoreConfig-class.html#_setNoEject CedarBackup2.config.StoreConfig.warnMidnite CedarBackup2.config.StoreConfig-class.html#warnMidnite CedarBackup2.config.StoreConfig._setWarnMidnite CedarBackup2.config.StoreConfig-class.html#_setWarnMidnite CedarBackup2.config.StoreConfig.deviceType CedarBackup2.config.StoreConfig-class.html#deviceType CedarBackup2.config.StoreConfig.driveSpeed CedarBackup2.config.StoreConfig-class.html#driveSpeed CedarBackup2.config.StoreConfig._getMediaType CedarBackup2.config.StoreConfig-class.html#_getMediaType CedarBackup2.config.StoreConfig._getDeviceType CedarBackup2.config.StoreConfig-class.html#_getDeviceType CedarBackup2.config.StoreConfig.noEject 
CedarBackup2.config.StoreConfig-class.html#noEject CedarBackup2.config.StoreConfig._getBlankBehavior CedarBackup2.config.StoreConfig-class.html#_getBlankBehavior CedarBackup2.config.StoreConfig._getWarnMidnite CedarBackup2.config.StoreConfig-class.html#_getWarnMidnite CedarBackup2.config.StoreConfig._setMediaType CedarBackup2.config.StoreConfig-class.html#_setMediaType CedarBackup2.config.StoreConfig.deviceScsiId CedarBackup2.config.StoreConfig-class.html#deviceScsiId CedarBackup2.config.StoreConfig.blankBehavior CedarBackup2.config.StoreConfig-class.html#blankBehavior CedarBackup2.config.StoreConfig._getDriveSpeed CedarBackup2.config.StoreConfig-class.html#_getDriveSpeed CedarBackup2.config.StoreConfig._setCheckData CedarBackup2.config.StoreConfig-class.html#_setCheckData CedarBackup2.config.StoreConfig._setRefreshMediaDelay CedarBackup2.config.StoreConfig-class.html#_setRefreshMediaDelay CedarBackup2.config.StoreConfig.devicePath CedarBackup2.config.StoreConfig-class.html#devicePath CedarBackup2.config.StoreConfig.checkData CedarBackup2.config.StoreConfig-class.html#checkData CedarBackup2.config.StoreConfig._setDriveSpeed CedarBackup2.config.StoreConfig-class.html#_setDriveSpeed CedarBackup2.config.StoreConfig._setDeviceType CedarBackup2.config.StoreConfig-class.html#_setDeviceType CedarBackup2.config.StoreConfig.checkMedia CedarBackup2.config.StoreConfig-class.html#checkMedia CedarBackup2.config.StoreConfig._getEjectDelay CedarBackup2.config.StoreConfig-class.html#_getEjectDelay CedarBackup2.config.StoreConfig._getRefreshMediaDelay CedarBackup2.config.StoreConfig-class.html#_getRefreshMediaDelay CedarBackup2.config.StoreConfig._getNoEject CedarBackup2.config.StoreConfig-class.html#_getNoEject CedarBackup2.config.StoreConfig._getSourceDir CedarBackup2.config.StoreConfig-class.html#_getSourceDir CedarBackup2.config.StoreConfig._setCheckMedia CedarBackup2.config.StoreConfig-class.html#_setCheckMedia CedarBackup2.config.StoreConfig.__repr__ 
CedarBackup2.config.StoreConfig-class.html#__repr__ CedarBackup2.config.StoreConfig.ejectDelay CedarBackup2.config.StoreConfig-class.html#ejectDelay CedarBackup2.config.StoreConfig._setBlankBehavior CedarBackup2.config.StoreConfig-class.html#_setBlankBehavior CedarBackup2.config.StoreConfig._getCheckData CedarBackup2.config.StoreConfig-class.html#_getCheckData CedarBackup2.extend.capacity.CapacityConfig CedarBackup2.extend.capacity.CapacityConfig-class.html CedarBackup2.extend.capacity.CapacityConfig._setMaxPercentage CedarBackup2.extend.capacity.CapacityConfig-class.html#_setMaxPercentage CedarBackup2.extend.capacity.CapacityConfig.__str__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__str__ CedarBackup2.extend.capacity.CapacityConfig.__cmp__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__cmp__ CedarBackup2.extend.capacity.CapacityConfig._getMaxPercentage CedarBackup2.extend.capacity.CapacityConfig-class.html#_getMaxPercentage CedarBackup2.extend.capacity.CapacityConfig.__repr__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__repr__ CedarBackup2.extend.capacity.CapacityConfig.maxPercentage CedarBackup2.extend.capacity.CapacityConfig-class.html#maxPercentage CedarBackup2.extend.capacity.CapacityConfig._setMinBytes CedarBackup2.extend.capacity.CapacityConfig-class.html#_setMinBytes CedarBackup2.extend.capacity.CapacityConfig._getMinBytes CedarBackup2.extend.capacity.CapacityConfig-class.html#_getMinBytes CedarBackup2.extend.capacity.CapacityConfig.minBytes CedarBackup2.extend.capacity.CapacityConfig-class.html#minBytes CedarBackup2.extend.capacity.CapacityConfig.__init__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__init__ CedarBackup2.extend.capacity.LocalConfig CedarBackup2.extend.capacity.LocalConfig-class.html CedarBackup2.extend.capacity.LocalConfig.__str__ CedarBackup2.extend.capacity.LocalConfig-class.html#__str__ CedarBackup2.extend.capacity.LocalConfig._addPercentageQuantity 
CedarBackup2.extend.capacity.LocalConfig-class.html#_addPercentageQuantity CedarBackup2.extend.capacity.LocalConfig._parseXmlData CedarBackup2.extend.capacity.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.capacity.LocalConfig.__init__ CedarBackup2.extend.capacity.LocalConfig-class.html#__init__ CedarBackup2.extend.capacity.LocalConfig.capacity CedarBackup2.extend.capacity.LocalConfig-class.html#capacity CedarBackup2.extend.capacity.LocalConfig.__cmp__ CedarBackup2.extend.capacity.LocalConfig-class.html#__cmp__ CedarBackup2.extend.capacity.LocalConfig._readPercentageQuantity CedarBackup2.extend.capacity.LocalConfig-class.html#_readPercentageQuantity CedarBackup2.extend.capacity.LocalConfig._parseCapacity CedarBackup2.extend.capacity.LocalConfig-class.html#_parseCapacity CedarBackup2.extend.capacity.LocalConfig._getCapacity CedarBackup2.extend.capacity.LocalConfig-class.html#_getCapacity CedarBackup2.extend.capacity.LocalConfig.addConfig CedarBackup2.extend.capacity.LocalConfig-class.html#addConfig CedarBackup2.extend.capacity.LocalConfig.validate CedarBackup2.extend.capacity.LocalConfig-class.html#validate CedarBackup2.extend.capacity.LocalConfig.__repr__ CedarBackup2.extend.capacity.LocalConfig-class.html#__repr__ CedarBackup2.extend.capacity.LocalConfig._setCapacity CedarBackup2.extend.capacity.LocalConfig-class.html#_setCapacity CedarBackup2.extend.capacity.PercentageQuantity CedarBackup2.extend.capacity.PercentageQuantity-class.html CedarBackup2.extend.capacity.PercentageQuantity._setQuantity CedarBackup2.extend.capacity.PercentageQuantity-class.html#_setQuantity CedarBackup2.extend.capacity.PercentageQuantity._getPercentage CedarBackup2.extend.capacity.PercentageQuantity-class.html#_getPercentage CedarBackup2.extend.capacity.PercentageQuantity.__str__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__str__ CedarBackup2.extend.capacity.PercentageQuantity.__cmp__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__cmp__ 
CedarBackup2.extend.capacity.PercentageQuantity.__repr__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__repr__ CedarBackup2.extend.capacity.PercentageQuantity._getQuantity CedarBackup2.extend.capacity.PercentageQuantity-class.html#_getQuantity CedarBackup2.extend.capacity.PercentageQuantity.percentage CedarBackup2.extend.capacity.PercentageQuantity-class.html#percentage CedarBackup2.extend.capacity.PercentageQuantity.__init__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__init__ CedarBackup2.extend.capacity.PercentageQuantity.quantity CedarBackup2.extend.capacity.PercentageQuantity-class.html#quantity CedarBackup2.extend.encrypt.EncryptConfig CedarBackup2.extend.encrypt.EncryptConfig-class.html CedarBackup2.extend.encrypt.EncryptConfig._getEncryptMode CedarBackup2.extend.encrypt.EncryptConfig-class.html#_getEncryptMode CedarBackup2.extend.encrypt.EncryptConfig.encryptMode CedarBackup2.extend.encrypt.EncryptConfig-class.html#encryptMode CedarBackup2.extend.encrypt.EncryptConfig.__str__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__str__ CedarBackup2.extend.encrypt.EncryptConfig.__cmp__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__cmp__ CedarBackup2.extend.encrypt.EncryptConfig._setEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig-class.html#_setEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig.__init__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__init__ CedarBackup2.extend.encrypt.EncryptConfig.encryptTarget CedarBackup2.extend.encrypt.EncryptConfig-class.html#encryptTarget CedarBackup2.extend.encrypt.EncryptConfig._setEncryptMode CedarBackup2.extend.encrypt.EncryptConfig-class.html#_setEncryptMode CedarBackup2.extend.encrypt.EncryptConfig._getEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig-class.html#_getEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig.__repr__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__repr__ CedarBackup2.extend.encrypt.LocalConfig 
CedarBackup2.extend.encrypt.LocalConfig-class.html CedarBackup2.extend.encrypt.LocalConfig.__str__ CedarBackup2.extend.encrypt.LocalConfig-class.html#__str__ CedarBackup2.extend.encrypt.LocalConfig._parseXmlData CedarBackup2.extend.encrypt.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.encrypt.LocalConfig.__init__ CedarBackup2.extend.encrypt.LocalConfig-class.html#__init__ CedarBackup2.extend.encrypt.LocalConfig._parseEncrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#_parseEncrypt CedarBackup2.extend.encrypt.LocalConfig.encrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#encrypt CedarBackup2.extend.encrypt.LocalConfig._getEncrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#_getEncrypt CedarBackup2.extend.encrypt.LocalConfig.__cmp__ CedarBackup2.extend.encrypt.LocalConfig-class.html#__cmp__ CedarBackup2.extend.encrypt.LocalConfig.addConfig CedarBackup2.extend.encrypt.LocalConfig-class.html#addConfig CedarBackup2.extend.encrypt.LocalConfig.validate CedarBackup2.extend.encrypt.LocalConfig-class.html#validate CedarBackup2.extend.encrypt.LocalConfig._setEncrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#_setEncrypt CedarBackup2.extend.encrypt.LocalConfig.__repr__ CedarBackup2.extend.encrypt.LocalConfig-class.html#__repr__ CedarBackup2.extend.mbox.LocalConfig CedarBackup2.extend.mbox.LocalConfig-class.html CedarBackup2.extend.mbox.LocalConfig.__str__ CedarBackup2.extend.mbox.LocalConfig-class.html#__str__ CedarBackup2.extend.mbox.LocalConfig._parseXmlData CedarBackup2.extend.mbox.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.mbox.LocalConfig.__init__ CedarBackup2.extend.mbox.LocalConfig-class.html#__init__ CedarBackup2.extend.mbox.LocalConfig.__cmp__ CedarBackup2.extend.mbox.LocalConfig-class.html#__cmp__ CedarBackup2.extend.mbox.LocalConfig.addConfig CedarBackup2.extend.mbox.LocalConfig-class.html#addConfig CedarBackup2.extend.mbox.LocalConfig.validate CedarBackup2.extend.mbox.LocalConfig-class.html#validate 
CedarBackup2.extend.mbox.LocalConfig._addMboxDir CedarBackup2.extend.mbox.LocalConfig-class.html#_addMboxDir CedarBackup2.extend.mbox.LocalConfig._parseMboxFiles CedarBackup2.extend.mbox.LocalConfig-class.html#_parseMboxFiles CedarBackup2.extend.mbox.LocalConfig._getMbox CedarBackup2.extend.mbox.LocalConfig-class.html#_getMbox CedarBackup2.extend.mbox.LocalConfig._addMboxFile CedarBackup2.extend.mbox.LocalConfig-class.html#_addMboxFile CedarBackup2.extend.mbox.LocalConfig._parseExclusions CedarBackup2.extend.mbox.LocalConfig-class.html#_parseExclusions CedarBackup2.extend.mbox.LocalConfig._setMbox CedarBackup2.extend.mbox.LocalConfig-class.html#_setMbox CedarBackup2.extend.mbox.LocalConfig._parseMbox CedarBackup2.extend.mbox.LocalConfig-class.html#_parseMbox CedarBackup2.extend.mbox.LocalConfig.__repr__ CedarBackup2.extend.mbox.LocalConfig-class.html#__repr__ CedarBackup2.extend.mbox.LocalConfig.mbox CedarBackup2.extend.mbox.LocalConfig-class.html#mbox CedarBackup2.extend.mbox.LocalConfig._parseMboxDirs CedarBackup2.extend.mbox.LocalConfig-class.html#_parseMboxDirs CedarBackup2.extend.mbox.MboxConfig CedarBackup2.extend.mbox.MboxConfig-class.html CedarBackup2.extend.mbox.MboxConfig._getCollectMode CedarBackup2.extend.mbox.MboxConfig-class.html#_getCollectMode CedarBackup2.extend.mbox.MboxConfig.mboxFiles CedarBackup2.extend.mbox.MboxConfig-class.html#mboxFiles CedarBackup2.extend.mbox.MboxConfig.__str__ CedarBackup2.extend.mbox.MboxConfig-class.html#__str__ CedarBackup2.extend.mbox.MboxConfig.__init__ CedarBackup2.extend.mbox.MboxConfig-class.html#__init__ CedarBackup2.extend.mbox.MboxConfig._setCollectMode CedarBackup2.extend.mbox.MboxConfig-class.html#_setCollectMode CedarBackup2.extend.mbox.MboxConfig._getMboxFiles CedarBackup2.extend.mbox.MboxConfig-class.html#_getMboxFiles CedarBackup2.extend.mbox.MboxConfig.__cmp__ CedarBackup2.extend.mbox.MboxConfig-class.html#__cmp__ CedarBackup2.extend.mbox.MboxConfig._setMboxFiles 
CedarBackup2.extend.mbox.MboxConfig-class.html#_setMboxFiles CedarBackup2.extend.mbox.MboxConfig.compressMode CedarBackup2.extend.mbox.MboxConfig-class.html#compressMode CedarBackup2.extend.mbox.MboxConfig._getMboxDirs CedarBackup2.extend.mbox.MboxConfig-class.html#_getMboxDirs CedarBackup2.extend.mbox.MboxConfig._setCompressMode CedarBackup2.extend.mbox.MboxConfig-class.html#_setCompressMode CedarBackup2.extend.mbox.MboxConfig._setMboxDirs CedarBackup2.extend.mbox.MboxConfig-class.html#_setMboxDirs CedarBackup2.extend.mbox.MboxConfig.mboxDirs CedarBackup2.extend.mbox.MboxConfig-class.html#mboxDirs CedarBackup2.extend.mbox.MboxConfig.collectMode CedarBackup2.extend.mbox.MboxConfig-class.html#collectMode CedarBackup2.extend.mbox.MboxConfig._getCompressMode CedarBackup2.extend.mbox.MboxConfig-class.html#_getCompressMode CedarBackup2.extend.mbox.MboxConfig.__repr__ CedarBackup2.extend.mbox.MboxConfig-class.html#__repr__ CedarBackup2.extend.mbox.MboxDir CedarBackup2.extend.mbox.MboxDir-class.html CedarBackup2.extend.mbox.MboxDir._getCollectMode CedarBackup2.extend.mbox.MboxDir-class.html#_getCollectMode CedarBackup2.extend.mbox.MboxDir._getCompressMode CedarBackup2.extend.mbox.MboxDir-class.html#_getCompressMode CedarBackup2.extend.mbox.MboxDir.__str__ CedarBackup2.extend.mbox.MboxDir-class.html#__str__ CedarBackup2.extend.mbox.MboxDir._getAbsolutePath CedarBackup2.extend.mbox.MboxDir-class.html#_getAbsolutePath CedarBackup2.extend.mbox.MboxDir._setExcludePatterns CedarBackup2.extend.mbox.MboxDir-class.html#_setExcludePatterns CedarBackup2.extend.mbox.MboxDir.__init__ CedarBackup2.extend.mbox.MboxDir-class.html#__init__ CedarBackup2.extend.mbox.MboxDir._setCollectMode CedarBackup2.extend.mbox.MboxDir-class.html#_setCollectMode CedarBackup2.extend.mbox.MboxDir.absolutePath CedarBackup2.extend.mbox.MboxDir-class.html#absolutePath CedarBackup2.extend.mbox.MboxDir.__cmp__ CedarBackup2.extend.mbox.MboxDir-class.html#__cmp__ 
CedarBackup2.extend.mbox.MboxDir.relativeExcludePaths CedarBackup2.extend.mbox.MboxDir-class.html#relativeExcludePaths CedarBackup2.extend.mbox.MboxDir.compressMode CedarBackup2.extend.mbox.MboxDir-class.html#compressMode CedarBackup2.extend.mbox.MboxDir._getRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir-class.html#_getRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir._setCompressMode CedarBackup2.extend.mbox.MboxDir-class.html#_setCompressMode CedarBackup2.extend.mbox.MboxDir._setRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir-class.html#_setRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir.collectMode CedarBackup2.extend.mbox.MboxDir-class.html#collectMode CedarBackup2.extend.mbox.MboxDir._getExcludePatterns CedarBackup2.extend.mbox.MboxDir-class.html#_getExcludePatterns CedarBackup2.extend.mbox.MboxDir.excludePatterns CedarBackup2.extend.mbox.MboxDir-class.html#excludePatterns CedarBackup2.extend.mbox.MboxDir._setAbsolutePath CedarBackup2.extend.mbox.MboxDir-class.html#_setAbsolutePath CedarBackup2.extend.mbox.MboxDir.__repr__ CedarBackup2.extend.mbox.MboxDir-class.html#__repr__ CedarBackup2.extend.mbox.MboxFile CedarBackup2.extend.mbox.MboxFile-class.html CedarBackup2.extend.mbox.MboxFile._getCollectMode CedarBackup2.extend.mbox.MboxFile-class.html#_getCollectMode CedarBackup2.extend.mbox.MboxFile.__str__ CedarBackup2.extend.mbox.MboxFile-class.html#__str__ CedarBackup2.extend.mbox.MboxFile._getAbsolutePath CedarBackup2.extend.mbox.MboxFile-class.html#_getAbsolutePath CedarBackup2.extend.mbox.MboxFile.__init__ CedarBackup2.extend.mbox.MboxFile-class.html#__init__ CedarBackup2.extend.mbox.MboxFile._setCollectMode CedarBackup2.extend.mbox.MboxFile-class.html#_setCollectMode CedarBackup2.extend.mbox.MboxFile.absolutePath CedarBackup2.extend.mbox.MboxFile-class.html#absolutePath CedarBackup2.extend.mbox.MboxFile.__cmp__ CedarBackup2.extend.mbox.MboxFile-class.html#__cmp__ CedarBackup2.extend.mbox.MboxFile.compressMode 
CedarBackup2.extend.mbox.MboxFile-class.html#compressMode CedarBackup2.extend.mbox.MboxFile._setCompressMode CedarBackup2.extend.mbox.MboxFile-class.html#_setCompressMode CedarBackup2.extend.mbox.MboxFile.collectMode CedarBackup2.extend.mbox.MboxFile-class.html#collectMode CedarBackup2.extend.mbox.MboxFile._getCompressMode CedarBackup2.extend.mbox.MboxFile-class.html#_getCompressMode CedarBackup2.extend.mbox.MboxFile._setAbsolutePath CedarBackup2.extend.mbox.MboxFile-class.html#_setAbsolutePath CedarBackup2.extend.mbox.MboxFile.__repr__ CedarBackup2.extend.mbox.MboxFile-class.html#__repr__ CedarBackup2.extend.mysql.LocalConfig CedarBackup2.extend.mysql.LocalConfig-class.html CedarBackup2.extend.mysql.LocalConfig.__str__ CedarBackup2.extend.mysql.LocalConfig-class.html#__str__ CedarBackup2.extend.mysql.LocalConfig.mysql CedarBackup2.extend.mysql.LocalConfig-class.html#mysql CedarBackup2.extend.mysql.LocalConfig._parseMysql CedarBackup2.extend.mysql.LocalConfig-class.html#_parseMysql CedarBackup2.extend.mysql.LocalConfig.__init__ CedarBackup2.extend.mysql.LocalConfig-class.html#__init__ CedarBackup2.extend.mysql.LocalConfig.__cmp__ CedarBackup2.extend.mysql.LocalConfig-class.html#__cmp__ CedarBackup2.extend.mysql.LocalConfig._setMysql CedarBackup2.extend.mysql.LocalConfig-class.html#_setMysql CedarBackup2.extend.mysql.LocalConfig._parseXmlData CedarBackup2.extend.mysql.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.mysql.LocalConfig._getMysql CedarBackup2.extend.mysql.LocalConfig-class.html#_getMysql CedarBackup2.extend.mysql.LocalConfig.addConfig CedarBackup2.extend.mysql.LocalConfig-class.html#addConfig CedarBackup2.extend.mysql.LocalConfig.validate CedarBackup2.extend.mysql.LocalConfig-class.html#validate CedarBackup2.extend.mysql.LocalConfig.__repr__ CedarBackup2.extend.mysql.LocalConfig-class.html#__repr__ CedarBackup2.extend.mysql.MysqlConfig CedarBackup2.extend.mysql.MysqlConfig-class.html CedarBackup2.extend.mysql.MysqlConfig.all 
CedarBackup2.extend.mysql.MysqlConfig-class.html#all CedarBackup2.extend.mysql.MysqlConfig.__str__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__str__ CedarBackup2.extend.mysql.MysqlConfig._setAll CedarBackup2.extend.mysql.MysqlConfig-class.html#_setAll CedarBackup2.extend.mysql.MysqlConfig.__init__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__init__ CedarBackup2.extend.mysql.MysqlConfig._setDatabases CedarBackup2.extend.mysql.MysqlConfig-class.html#_setDatabases CedarBackup2.extend.mysql.MysqlConfig._getAll CedarBackup2.extend.mysql.MysqlConfig-class.html#_getAll CedarBackup2.extend.mysql.MysqlConfig.__cmp__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__cmp__ CedarBackup2.extend.mysql.MysqlConfig._setPassword CedarBackup2.extend.mysql.MysqlConfig-class.html#_setPassword CedarBackup2.extend.mysql.MysqlConfig._getUser CedarBackup2.extend.mysql.MysqlConfig-class.html#_getUser CedarBackup2.extend.mysql.MysqlConfig._setUser CedarBackup2.extend.mysql.MysqlConfig-class.html#_setUser CedarBackup2.extend.mysql.MysqlConfig.compressMode CedarBackup2.extend.mysql.MysqlConfig-class.html#compressMode CedarBackup2.extend.mysql.MysqlConfig._getPassword CedarBackup2.extend.mysql.MysqlConfig-class.html#_getPassword CedarBackup2.extend.mysql.MysqlConfig.user CedarBackup2.extend.mysql.MysqlConfig-class.html#user CedarBackup2.extend.mysql.MysqlConfig._setCompressMode CedarBackup2.extend.mysql.MysqlConfig-class.html#_setCompressMode CedarBackup2.extend.mysql.MysqlConfig.password CedarBackup2.extend.mysql.MysqlConfig-class.html#password CedarBackup2.extend.mysql.MysqlConfig._getCompressMode CedarBackup2.extend.mysql.MysqlConfig-class.html#_getCompressMode CedarBackup2.extend.mysql.MysqlConfig._getDatabases CedarBackup2.extend.mysql.MysqlConfig-class.html#_getDatabases CedarBackup2.extend.mysql.MysqlConfig.__repr__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__repr__ CedarBackup2.extend.mysql.MysqlConfig.databases 
CedarBackup2.extend.mysql.MysqlConfig-class.html#databases CedarBackup2.extend.postgresql.LocalConfig CedarBackup2.extend.postgresql.LocalConfig-class.html CedarBackup2.extend.postgresql.LocalConfig.__str__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__str__ CedarBackup2.extend.postgresql.LocalConfig._parseXmlData CedarBackup2.extend.postgresql.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.postgresql.LocalConfig.__init__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__init__ CedarBackup2.extend.postgresql.LocalConfig._setPostgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#_setPostgresql CedarBackup2.extend.postgresql.LocalConfig.__cmp__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__cmp__ CedarBackup2.extend.postgresql.LocalConfig._parsePostgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#_parsePostgresql CedarBackup2.extend.postgresql.LocalConfig.addConfig CedarBackup2.extend.postgresql.LocalConfig-class.html#addConfig CedarBackup2.extend.postgresql.LocalConfig.validate CedarBackup2.extend.postgresql.LocalConfig-class.html#validate CedarBackup2.extend.postgresql.LocalConfig.postgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#postgresql CedarBackup2.extend.postgresql.LocalConfig._getPostgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#_getPostgresql CedarBackup2.extend.postgresql.LocalConfig.__repr__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__repr__ CedarBackup2.extend.postgresql.PostgresqlConfig CedarBackup2.extend.postgresql.PostgresqlConfig-class.html CedarBackup2.extend.postgresql.PostgresqlConfig.all CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#all CedarBackup2.extend.postgresql.PostgresqlConfig.__str__ CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__str__ CedarBackup2.extend.postgresql.PostgresqlConfig._setAll CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setAll CedarBackup2.extend.postgresql.PostgresqlConfig.__init__ 
CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__init__ CedarBackup2.extend.postgresql.PostgresqlConfig._setDatabases CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setDatabases CedarBackup2.extend.postgresql.PostgresqlConfig._getAll CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getAll CedarBackup2.extend.postgresql.PostgresqlConfig.__cmp__ CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__cmp__ CedarBackup2.extend.postgresql.PostgresqlConfig._getUser CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getUser CedarBackup2.extend.postgresql.PostgresqlConfig._setUser CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setUser CedarBackup2.extend.postgresql.PostgresqlConfig.compressMode CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#compressMode CedarBackup2.extend.postgresql.PostgresqlConfig.user CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#user CedarBackup2.extend.postgresql.PostgresqlConfig._setCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig._getCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig._getDatabases CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getDatabases CedarBackup2.extend.postgresql.PostgresqlConfig.__repr__ CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__repr__ CedarBackup2.extend.postgresql.PostgresqlConfig.databases CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#databases CedarBackup2.extend.split.LocalConfig CedarBackup2.extend.split.LocalConfig-class.html CedarBackup2.extend.split.LocalConfig.__str__ CedarBackup2.extend.split.LocalConfig-class.html#__str__ CedarBackup2.extend.split.LocalConfig._getSplit CedarBackup2.extend.split.LocalConfig-class.html#_getSplit CedarBackup2.extend.split.LocalConfig._parseXmlData 
CedarBackup2.extend.split.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.split.LocalConfig.__init__ CedarBackup2.extend.split.LocalConfig-class.html#__init__ CedarBackup2.extend.split.LocalConfig.__cmp__ CedarBackup2.extend.split.LocalConfig-class.html#__cmp__ CedarBackup2.extend.split.LocalConfig._setSplit CedarBackup2.extend.split.LocalConfig-class.html#_setSplit CedarBackup2.extend.split.LocalConfig.split CedarBackup2.extend.split.LocalConfig-class.html#split CedarBackup2.extend.split.LocalConfig.addConfig CedarBackup2.extend.split.LocalConfig-class.html#addConfig CedarBackup2.extend.split.LocalConfig.validate CedarBackup2.extend.split.LocalConfig-class.html#validate CedarBackup2.extend.split.LocalConfig.__repr__ CedarBackup2.extend.split.LocalConfig-class.html#__repr__ CedarBackup2.extend.split.LocalConfig._parseSplit CedarBackup2.extend.split.LocalConfig-class.html#_parseSplit CedarBackup2.extend.split.SplitConfig CedarBackup2.extend.split.SplitConfig-class.html CedarBackup2.extend.split.SplitConfig.splitSize CedarBackup2.extend.split.SplitConfig-class.html#splitSize CedarBackup2.extend.split.SplitConfig.__str__ CedarBackup2.extend.split.SplitConfig-class.html#__str__ CedarBackup2.extend.split.SplitConfig._setSplitSize CedarBackup2.extend.split.SplitConfig-class.html#_setSplitSize CedarBackup2.extend.split.SplitConfig._setSizeLimit CedarBackup2.extend.split.SplitConfig-class.html#_setSizeLimit CedarBackup2.extend.split.SplitConfig.__cmp__ CedarBackup2.extend.split.SplitConfig-class.html#__cmp__ CedarBackup2.extend.split.SplitConfig._getSplitSize CedarBackup2.extend.split.SplitConfig-class.html#_getSplitSize CedarBackup2.extend.split.SplitConfig.__repr__ CedarBackup2.extend.split.SplitConfig-class.html#__repr__ CedarBackup2.extend.split.SplitConfig.sizeLimit CedarBackup2.extend.split.SplitConfig-class.html#sizeLimit CedarBackup2.extend.split.SplitConfig._getSizeLimit CedarBackup2.extend.split.SplitConfig-class.html#_getSizeLimit 
CedarBackup2.extend.split.SplitConfig.__init__ CedarBackup2.extend.split.SplitConfig-class.html#__init__ CedarBackup2.extend.subversion.BDBRepository CedarBackup2.extend.subversion.BDBRepository-class.html CedarBackup2.extend.subversion.Repository._getCollectMode CedarBackup2.extend.subversion.Repository-class.html#_getCollectMode CedarBackup2.extend.subversion.Repository.__str__ CedarBackup2.extend.subversion.Repository-class.html#__str__ CedarBackup2.extend.subversion.BDBRepository.__init__ CedarBackup2.extend.subversion.BDBRepository-class.html#__init__ CedarBackup2.extend.subversion.Repository._setCollectMode CedarBackup2.extend.subversion.Repository-class.html#_setCollectMode CedarBackup2.extend.subversion.Repository.__cmp__ CedarBackup2.extend.subversion.Repository-class.html#__cmp__ CedarBackup2.extend.subversion.Repository._setRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup2.extend.subversion.Repository.repositoryType CedarBackup2.extend.subversion.Repository-class.html#repositoryType CedarBackup2.extend.subversion.Repository.compressMode CedarBackup2.extend.subversion.Repository-class.html#compressMode CedarBackup2.extend.subversion.Repository._setRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup2.extend.subversion.Repository._getRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup2.extend.subversion.Repository._setCompressMode CedarBackup2.extend.subversion.Repository-class.html#_setCompressMode CedarBackup2.extend.subversion.Repository.collectMode CedarBackup2.extend.subversion.Repository-class.html#collectMode CedarBackup2.extend.subversion.Repository._getCompressMode CedarBackup2.extend.subversion.Repository-class.html#_getCompressMode CedarBackup2.extend.subversion.Repository.repositoryPath CedarBackup2.extend.subversion.Repository-class.html#repositoryPath 
CedarBackup2.extend.subversion.BDBRepository.__repr__ CedarBackup2.extend.subversion.BDBRepository-class.html#__repr__ CedarBackup2.extend.subversion.Repository._getRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup2.extend.subversion.FSFSRepository CedarBackup2.extend.subversion.FSFSRepository-class.html CedarBackup2.extend.subversion.Repository._getCollectMode CedarBackup2.extend.subversion.Repository-class.html#_getCollectMode CedarBackup2.extend.subversion.Repository.__str__ CedarBackup2.extend.subversion.Repository-class.html#__str__ CedarBackup2.extend.subversion.FSFSRepository.__init__ CedarBackup2.extend.subversion.FSFSRepository-class.html#__init__ CedarBackup2.extend.subversion.Repository._setCollectMode CedarBackup2.extend.subversion.Repository-class.html#_setCollectMode CedarBackup2.extend.subversion.Repository.__cmp__ CedarBackup2.extend.subversion.Repository-class.html#__cmp__ CedarBackup2.extend.subversion.Repository._setRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup2.extend.subversion.Repository.repositoryType CedarBackup2.extend.subversion.Repository-class.html#repositoryType CedarBackup2.extend.subversion.Repository.compressMode CedarBackup2.extend.subversion.Repository-class.html#compressMode CedarBackup2.extend.subversion.Repository._setRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup2.extend.subversion.Repository._getRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup2.extend.subversion.Repository._setCompressMode CedarBackup2.extend.subversion.Repository-class.html#_setCompressMode CedarBackup2.extend.subversion.Repository.collectMode CedarBackup2.extend.subversion.Repository-class.html#collectMode CedarBackup2.extend.subversion.Repository._getCompressMode CedarBackup2.extend.subversion.Repository-class.html#_getCompressMode 
CedarBackup2.extend.subversion.Repository.repositoryPath CedarBackup2.extend.subversion.Repository-class.html#repositoryPath CedarBackup2.extend.subversion.FSFSRepository.__repr__ CedarBackup2.extend.subversion.FSFSRepository-class.html#__repr__ CedarBackup2.extend.subversion.Repository._getRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup2.extend.subversion.LocalConfig CedarBackup2.extend.subversion.LocalConfig-class.html CedarBackup2.extend.subversion.LocalConfig._getSubversion CedarBackup2.extend.subversion.LocalConfig-class.html#_getSubversion CedarBackup2.extend.subversion.LocalConfig.__str__ CedarBackup2.extend.subversion.LocalConfig-class.html#__str__ CedarBackup2.extend.subversion.LocalConfig._parseXmlData CedarBackup2.extend.subversion.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.subversion.LocalConfig.__init__ CedarBackup2.extend.subversion.LocalConfig-class.html#__init__ CedarBackup2.extend.subversion.LocalConfig.__cmp__ CedarBackup2.extend.subversion.LocalConfig-class.html#__cmp__ CedarBackup2.extend.subversion.LocalConfig.subversion CedarBackup2.extend.subversion.LocalConfig-class.html#subversion CedarBackup2.extend.subversion.LocalConfig._parseRepositories CedarBackup2.extend.subversion.LocalConfig-class.html#_parseRepositories CedarBackup2.extend.subversion.LocalConfig._setSubversion CedarBackup2.extend.subversion.LocalConfig-class.html#_setSubversion CedarBackup2.extend.subversion.LocalConfig._parseSubversion CedarBackup2.extend.subversion.LocalConfig-class.html#_parseSubversion CedarBackup2.extend.subversion.LocalConfig.addConfig CedarBackup2.extend.subversion.LocalConfig-class.html#addConfig CedarBackup2.extend.subversion.LocalConfig.validate CedarBackup2.extend.subversion.LocalConfig-class.html#validate CedarBackup2.extend.subversion.LocalConfig._addRepository CedarBackup2.extend.subversion.LocalConfig-class.html#_addRepository CedarBackup2.extend.subversion.LocalConfig._parseExclusions 
CedarBackup2.extend.subversion.LocalConfig-class.html#_parseExclusions CedarBackup2.extend.subversion.LocalConfig.__repr__ CedarBackup2.extend.subversion.LocalConfig-class.html#__repr__ CedarBackup2.extend.subversion.LocalConfig._parseRepositoryDirs CedarBackup2.extend.subversion.LocalConfig-class.html#_parseRepositoryDirs CedarBackup2.extend.subversion.LocalConfig._addRepositoryDir CedarBackup2.extend.subversion.LocalConfig-class.html#_addRepositoryDir CedarBackup2.extend.subversion.Repository CedarBackup2.extend.subversion.Repository-class.html CedarBackup2.extend.subversion.Repository._getCollectMode CedarBackup2.extend.subversion.Repository-class.html#_getCollectMode CedarBackup2.extend.subversion.Repository.__str__ CedarBackup2.extend.subversion.Repository-class.html#__str__ CedarBackup2.extend.subversion.Repository.__init__ CedarBackup2.extend.subversion.Repository-class.html#__init__ CedarBackup2.extend.subversion.Repository._setRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup2.extend.subversion.Repository.__cmp__ CedarBackup2.extend.subversion.Repository-class.html#__cmp__ CedarBackup2.extend.subversion.Repository._setCollectMode CedarBackup2.extend.subversion.Repository-class.html#_setCollectMode CedarBackup2.extend.subversion.Repository.repositoryType CedarBackup2.extend.subversion.Repository-class.html#repositoryType CedarBackup2.extend.subversion.Repository.compressMode CedarBackup2.extend.subversion.Repository-class.html#compressMode CedarBackup2.extend.subversion.Repository._setRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup2.extend.subversion.Repository._getRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup2.extend.subversion.Repository._setCompressMode CedarBackup2.extend.subversion.Repository-class.html#_setCompressMode CedarBackup2.extend.subversion.Repository.collectMode 
CedarBackup2.extend.subversion.Repository-class.html#collectMode CedarBackup2.extend.subversion.Repository._getCompressMode CedarBackup2.extend.subversion.Repository-class.html#_getCompressMode CedarBackup2.extend.subversion.Repository.repositoryPath CedarBackup2.extend.subversion.Repository-class.html#repositoryPath CedarBackup2.extend.subversion.Repository.__repr__ CedarBackup2.extend.subversion.Repository-class.html#__repr__ CedarBackup2.extend.subversion.Repository._getRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup2.extend.subversion.RepositoryDir CedarBackup2.extend.subversion.RepositoryDir-class.html CedarBackup2.extend.subversion.RepositoryDir.directoryPath CedarBackup2.extend.subversion.RepositoryDir-class.html#directoryPath CedarBackup2.extend.subversion.RepositoryDir._getCollectMode CedarBackup2.extend.subversion.RepositoryDir-class.html#_getCollectMode CedarBackup2.extend.subversion.RepositoryDir._getCompressMode CedarBackup2.extend.subversion.RepositoryDir-class.html#_getCompressMode CedarBackup2.extend.subversion.RepositoryDir.repositoryType CedarBackup2.extend.subversion.RepositoryDir-class.html#repositoryType CedarBackup2.extend.subversion.RepositoryDir._setExcludePatterns CedarBackup2.extend.subversion.RepositoryDir-class.html#_setExcludePatterns CedarBackup2.extend.subversion.RepositoryDir.__init__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__init__ CedarBackup2.extend.subversion.RepositoryDir._setRepositoryType CedarBackup2.extend.subversion.RepositoryDir-class.html#_setRepositoryType CedarBackup2.extend.subversion.RepositoryDir.__cmp__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__cmp__ CedarBackup2.extend.subversion.RepositoryDir._setCollectMode CedarBackup2.extend.subversion.RepositoryDir-class.html#_setCollectMode CedarBackup2.extend.subversion.RepositoryDir.__str__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__str__ 
CedarBackup2.extend.subversion.RepositoryDir.relativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir-class.html#relativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir.compressMode CedarBackup2.extend.subversion.RepositoryDir-class.html#compressMode CedarBackup2.extend.subversion.RepositoryDir._getRepositoryType CedarBackup2.extend.subversion.RepositoryDir-class.html#_getRepositoryType CedarBackup2.extend.subversion.RepositoryDir._getRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir-class.html#_getRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir._setDirectoryPath CedarBackup2.extend.subversion.RepositoryDir-class.html#_setDirectoryPath CedarBackup2.extend.subversion.RepositoryDir._setCompressMode CedarBackup2.extend.subversion.RepositoryDir-class.html#_setCompressMode CedarBackup2.extend.subversion.RepositoryDir._setRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir-class.html#_setRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir.collectMode CedarBackup2.extend.subversion.RepositoryDir-class.html#collectMode CedarBackup2.extend.subversion.RepositoryDir._getExcludePatterns CedarBackup2.extend.subversion.RepositoryDir-class.html#_getExcludePatterns CedarBackup2.extend.subversion.RepositoryDir.excludePatterns CedarBackup2.extend.subversion.RepositoryDir-class.html#excludePatterns CedarBackup2.extend.subversion.RepositoryDir.__repr__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__repr__ CedarBackup2.extend.subversion.RepositoryDir._getDirectoryPath CedarBackup2.extend.subversion.RepositoryDir-class.html#_getDirectoryPath CedarBackup2.extend.subversion.SubversionConfig CedarBackup2.extend.subversion.SubversionConfig-class.html CedarBackup2.extend.subversion.SubversionConfig._getCollectMode CedarBackup2.extend.subversion.SubversionConfig-class.html#_getCollectMode CedarBackup2.extend.subversion.SubversionConfig._getCompressMode 
CedarBackup2.extend.subversion.SubversionConfig-class.html#_getCompressMode CedarBackup2.extend.subversion.SubversionConfig.__str__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__str__ CedarBackup2.extend.subversion.SubversionConfig._getRepositories CedarBackup2.extend.subversion.SubversionConfig-class.html#_getRepositories CedarBackup2.extend.subversion.SubversionConfig.__init__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__init__ CedarBackup2.extend.subversion.SubversionConfig._setCollectMode CedarBackup2.extend.subversion.SubversionConfig-class.html#_setCollectMode CedarBackup2.extend.subversion.SubversionConfig.__cmp__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__cmp__ CedarBackup2.extend.subversion.SubversionConfig.repositoryDirs CedarBackup2.extend.subversion.SubversionConfig-class.html#repositoryDirs CedarBackup2.extend.subversion.SubversionConfig.compressMode CedarBackup2.extend.subversion.SubversionConfig-class.html#compressMode CedarBackup2.extend.subversion.SubversionConfig._setCompressMode CedarBackup2.extend.subversion.SubversionConfig-class.html#_setCompressMode CedarBackup2.extend.subversion.SubversionConfig._getRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig-class.html#_getRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig.collectMode CedarBackup2.extend.subversion.SubversionConfig-class.html#collectMode CedarBackup2.extend.subversion.SubversionConfig.repositories CedarBackup2.extend.subversion.SubversionConfig-class.html#repositories CedarBackup2.extend.subversion.SubversionConfig._setRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig-class.html#_setRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig.__repr__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__repr__ CedarBackup2.extend.subversion.SubversionConfig._setRepositories CedarBackup2.extend.subversion.SubversionConfig-class.html#_setRepositories CedarBackup2.filesystem.BackupFileList 
CedarBackup2.filesystem.BackupFileList-class.html CedarBackup2.filesystem.FilesystemList._addDirContentsInternal CedarBackup2.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup2.filesystem.BackupFileList.removeUnchanged CedarBackup2.filesystem.BackupFileList-class.html#removeUnchanged CedarBackup2.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup2.filesystem.BackupFileList.generateFitted CedarBackup2.filesystem.BackupFileList-class.html#generateFitted CedarBackup2.filesystem.FilesystemList.addDirContents CedarBackup2.filesystem.FilesystemList-class.html#addDirContents CedarBackup2.filesystem.FilesystemList._getExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup2.filesystem.FilesystemList.excludePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludePatterns CedarBackup2.filesystem.FilesystemList._setExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup2.filesystem.BackupFileList.generateSizeMap CedarBackup2.filesystem.BackupFileList-class.html#generateSizeMap CedarBackup2.filesystem.FilesystemList.ignoreFile CedarBackup2.filesystem.FilesystemList-class.html#ignoreFile CedarBackup2.filesystem.BackupFileList.totalSize CedarBackup2.filesystem.BackupFileList-class.html#totalSize CedarBackup2.filesystem.BackupFileList.addDir CedarBackup2.filesystem.BackupFileList-class.html#addDir CedarBackup2.filesystem.FilesystemList.removeFiles CedarBackup2.filesystem.FilesystemList-class.html#removeFiles CedarBackup2.filesystem.FilesystemList.removeLinks CedarBackup2.filesystem.FilesystemList-class.html#removeLinks CedarBackup2.filesystem.BackupFileList.generateTarfile CedarBackup2.filesystem.BackupFileList-class.html#generateTarfile CedarBackup2.filesystem.FilesystemList.removeMatch CedarBackup2.filesystem.FilesystemList-class.html#removeMatch 
CedarBackup2.filesystem.FilesystemList.excludeLinks CedarBackup2.filesystem.FilesystemList-class.html#excludeLinks CedarBackup2.filesystem.FilesystemList._getExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup2.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup2.filesystem.BackupFileList._getKnapsackFunction CedarBackup2.filesystem.BackupFileList-class.html#_getKnapsackFunction CedarBackup2.filesystem.FilesystemList._setIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup2.filesystem.FilesystemList._getIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup2.filesystem.FilesystemList.addFile CedarBackup2.filesystem.FilesystemList-class.html#addFile CedarBackup2.filesystem.BackupFileList.generateDigestMap CedarBackup2.filesystem.BackupFileList-class.html#generateDigestMap CedarBackup2.filesystem.FilesystemList.removeInvalid CedarBackup2.filesystem.FilesystemList-class.html#removeInvalid CedarBackup2.filesystem.FilesystemList._setExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup2.filesystem.FilesystemList.removeDirs CedarBackup2.filesystem.FilesystemList-class.html#removeDirs CedarBackup2.filesystem.BackupFileList.__init__ CedarBackup2.filesystem.BackupFileList-class.html#__init__ CedarBackup2.filesystem.FilesystemList.normalize CedarBackup2.filesystem.FilesystemList-class.html#normalize CedarBackup2.filesystem.FilesystemList.excludeFiles CedarBackup2.filesystem.FilesystemList-class.html#excludeFiles CedarBackup2.filesystem.FilesystemList._getExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup2.filesystem.FilesystemList.verify CedarBackup2.filesystem.FilesystemList-class.html#verify CedarBackup2.filesystem.FilesystemList.excludeDirs CedarBackup2.filesystem.FilesystemList-class.html#excludeDirs 
CedarBackup2.filesystem.FilesystemList._setExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup2.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup2.filesystem.BackupFileList.generateSpan CedarBackup2.filesystem.BackupFileList-class.html#generateSpan CedarBackup2.filesystem.FilesystemList._getExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup2.filesystem.FilesystemList._setExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup2.filesystem.BackupFileList._getKnapsackTable CedarBackup2.filesystem.BackupFileList-class.html#_getKnapsackTable CedarBackup2.filesystem.FilesystemList._setExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup2.filesystem.FilesystemList.excludePaths CedarBackup2.filesystem.FilesystemList-class.html#excludePaths CedarBackup2.filesystem.BackupFileList._generateDigest CedarBackup2.filesystem.BackupFileList-class.html#_generateDigest CedarBackup2.filesystem.FilesystemList._getExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup2.filesystem.FilesystemList CedarBackup2.filesystem.FilesystemList-class.html CedarBackup2.filesystem.FilesystemList._setExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup2.filesystem.FilesystemList._addDirContentsInternal CedarBackup2.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup2.filesystem.FilesystemList.removeInvalid CedarBackup2.filesystem.FilesystemList-class.html#removeInvalid CedarBackup2.filesystem.FilesystemList.excludeLinks CedarBackup2.filesystem.FilesystemList-class.html#excludeLinks CedarBackup2.filesystem.FilesystemList._getExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup2.filesystem.FilesystemList._setExcludePatterns 
CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup2.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeDirs CedarBackup2.filesystem.FilesystemList-class.html#removeDirs CedarBackup2.filesystem.FilesystemList.__init__ CedarBackup2.filesystem.FilesystemList-class.html#__init__ CedarBackup2.filesystem.FilesystemList.normalize CedarBackup2.filesystem.FilesystemList-class.html#normalize CedarBackup2.filesystem.FilesystemList.excludeFiles CedarBackup2.filesystem.FilesystemList-class.html#excludeFiles CedarBackup2.filesystem.FilesystemList._getExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup2.filesystem.FilesystemList.verify CedarBackup2.filesystem.FilesystemList-class.html#verify CedarBackup2.filesystem.FilesystemList.addDir CedarBackup2.filesystem.FilesystemList-class.html#addDir CedarBackup2.filesystem.FilesystemList._setIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup2.filesystem.FilesystemList.removeFiles CedarBackup2.filesystem.FilesystemList-class.html#removeFiles CedarBackup2.filesystem.FilesystemList.excludeDirs CedarBackup2.filesystem.FilesystemList-class.html#excludeDirs CedarBackup2.filesystem.FilesystemList._setExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup2.filesystem.FilesystemList.ignoreFile CedarBackup2.filesystem.FilesystemList-class.html#ignoreFile CedarBackup2.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeLinks CedarBackup2.filesystem.FilesystemList-class.html#removeLinks CedarBackup2.filesystem.FilesystemList._getExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePaths 
CedarBackup2.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList._setExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup2.filesystem.FilesystemList._getIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup2.filesystem.FilesystemList._setExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup2.filesystem.FilesystemList.addDirContents CedarBackup2.filesystem.FilesystemList-class.html#addDirContents CedarBackup2.filesystem.FilesystemList.excludePaths CedarBackup2.filesystem.FilesystemList-class.html#excludePaths CedarBackup2.filesystem.FilesystemList.addFile CedarBackup2.filesystem.FilesystemList-class.html#addFile CedarBackup2.filesystem.FilesystemList._getExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup2.filesystem.FilesystemList.excludePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludePatterns CedarBackup2.filesystem.FilesystemList.removeMatch CedarBackup2.filesystem.FilesystemList-class.html#removeMatch CedarBackup2.filesystem.FilesystemList._getExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup2.filesystem.PurgeItemList CedarBackup2.filesystem.PurgeItemList-class.html CedarBackup2.filesystem.FilesystemList._setExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup2.filesystem.FilesystemList._addDirContentsInternal CedarBackup2.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup2.filesystem.FilesystemList.removeInvalid CedarBackup2.filesystem.FilesystemList-class.html#removeInvalid CedarBackup2.filesystem.FilesystemList.excludeLinks CedarBackup2.filesystem.FilesystemList-class.html#excludeLinks CedarBackup2.filesystem.FilesystemList._getExcludeDirs 
CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup2.filesystem.FilesystemList._setExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup2.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeDirs CedarBackup2.filesystem.FilesystemList-class.html#removeDirs CedarBackup2.filesystem.PurgeItemList.__init__ CedarBackup2.filesystem.PurgeItemList-class.html#__init__ CedarBackup2.filesystem.FilesystemList.normalize CedarBackup2.filesystem.FilesystemList-class.html#normalize CedarBackup2.filesystem.FilesystemList.excludeFiles CedarBackup2.filesystem.FilesystemList-class.html#excludeFiles CedarBackup2.filesystem.FilesystemList._getExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup2.filesystem.FilesystemList.verify CedarBackup2.filesystem.FilesystemList-class.html#verify CedarBackup2.filesystem.FilesystemList.addDir CedarBackup2.filesystem.FilesystemList-class.html#addDir CedarBackup2.filesystem.FilesystemList._setIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup2.filesystem.FilesystemList.removeFiles CedarBackup2.filesystem.FilesystemList-class.html#removeFiles CedarBackup2.filesystem.FilesystemList.excludeDirs CedarBackup2.filesystem.FilesystemList-class.html#excludeDirs CedarBackup2.filesystem.FilesystemList._setExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup2.filesystem.PurgeItemList.removeYoungFiles CedarBackup2.filesystem.PurgeItemList-class.html#removeYoungFiles CedarBackup2.filesystem.FilesystemList.ignoreFile CedarBackup2.filesystem.FilesystemList-class.html#ignoreFile CedarBackup2.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeLinks 
CedarBackup2.filesystem.FilesystemList-class.html#removeLinks CedarBackup2.filesystem.PurgeItemList.purgeItems CedarBackup2.filesystem.PurgeItemList-class.html#purgeItems CedarBackup2.filesystem.FilesystemList._getExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup2.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList._setExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup2.filesystem.FilesystemList._getIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup2.filesystem.FilesystemList._setExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup2.filesystem.FilesystemList.excludePaths CedarBackup2.filesystem.FilesystemList-class.html#excludePaths CedarBackup2.filesystem.PurgeItemList.addDirContents CedarBackup2.filesystem.PurgeItemList-class.html#addDirContents CedarBackup2.filesystem.FilesystemList.addFile CedarBackup2.filesystem.FilesystemList-class.html#addFile CedarBackup2.filesystem.FilesystemList._getExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup2.filesystem.FilesystemList.excludePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludePatterns CedarBackup2.filesystem.FilesystemList.removeMatch CedarBackup2.filesystem.FilesystemList-class.html#removeMatch CedarBackup2.filesystem.FilesystemList._getExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup2.filesystem.SpanItem CedarBackup2.filesystem.SpanItem-class.html CedarBackup2.filesystem.SpanItem.__init__ CedarBackup2.filesystem.SpanItem-class.html#__init__ CedarBackup2.peer.LocalPeer CedarBackup2.peer.LocalPeer-class.html CedarBackup2.peer.LocalPeer._copyLocalFile CedarBackup2.peer.LocalPeer-class.html#_copyLocalFile 
CedarBackup2.peer.LocalPeer._setIgnoreFailureMode CedarBackup2.peer.LocalPeer-class.html#_setIgnoreFailureMode CedarBackup2.peer.LocalPeer._getName CedarBackup2.peer.LocalPeer-class.html#_getName CedarBackup2.peer.LocalPeer.__init__ CedarBackup2.peer.LocalPeer-class.html#__init__ CedarBackup2.peer.LocalPeer.checkCollectIndicator CedarBackup2.peer.LocalPeer-class.html#checkCollectIndicator CedarBackup2.peer.LocalPeer.writeStageIndicator CedarBackup2.peer.LocalPeer-class.html#writeStageIndicator CedarBackup2.peer.LocalPeer._getIgnoreFailureMode CedarBackup2.peer.LocalPeer-class.html#_getIgnoreFailureMode CedarBackup2.peer.LocalPeer._copyLocalDir CedarBackup2.peer.LocalPeer-class.html#_copyLocalDir CedarBackup2.peer.LocalPeer.ignoreFailureMode CedarBackup2.peer.LocalPeer-class.html#ignoreFailureMode CedarBackup2.peer.LocalPeer._getCollectDir CedarBackup2.peer.LocalPeer-class.html#_getCollectDir CedarBackup2.peer.LocalPeer.name CedarBackup2.peer.LocalPeer-class.html#name CedarBackup2.peer.LocalPeer.collectDir CedarBackup2.peer.LocalPeer-class.html#collectDir CedarBackup2.peer.LocalPeer._setCollectDir CedarBackup2.peer.LocalPeer-class.html#_setCollectDir CedarBackup2.peer.LocalPeer.stagePeer CedarBackup2.peer.LocalPeer-class.html#stagePeer CedarBackup2.peer.LocalPeer._setName CedarBackup2.peer.LocalPeer-class.html#_setName CedarBackup2.peer.RemotePeer CedarBackup2.peer.RemotePeer-class.html CedarBackup2.peer.RemotePeer._getWorkingDir CedarBackup2.peer.RemotePeer-class.html#_getWorkingDir CedarBackup2.peer.RemotePeer._setLocalUser CedarBackup2.peer.RemotePeer-class.html#_setLocalUser CedarBackup2.peer.RemotePeer._getLocalUser CedarBackup2.peer.RemotePeer-class.html#_getLocalUser CedarBackup2.peer.RemotePeer._getRcpCommand CedarBackup2.peer.RemotePeer-class.html#_getRcpCommand CedarBackup2.peer.RemotePeer._copyRemoteFile CedarBackup2.peer.RemotePeer-class.html#_copyRemoteFile CedarBackup2.peer.RemotePeer._buildCbackCommand 
CedarBackup2.peer.RemotePeer-class.html#_buildCbackCommand CedarBackup2.peer.RemotePeer.cbackCommand CedarBackup2.peer.RemotePeer-class.html#cbackCommand CedarBackup2.peer.RemotePeer._setIgnoreFailureMode CedarBackup2.peer.RemotePeer-class.html#_setIgnoreFailureMode CedarBackup2.peer.RemotePeer.localUser CedarBackup2.peer.RemotePeer-class.html#localUser CedarBackup2.peer.RemotePeer.executeRemoteCommand CedarBackup2.peer.RemotePeer-class.html#executeRemoteCommand CedarBackup2.peer.RemotePeer._getName CedarBackup2.peer.RemotePeer-class.html#_getName CedarBackup2.peer.RemotePeer.__init__ CedarBackup2.peer.RemotePeer-class.html#__init__ CedarBackup2.peer.RemotePeer.writeStageIndicator CedarBackup2.peer.RemotePeer-class.html#writeStageIndicator CedarBackup2.peer.RemotePeer._setCbackCommand CedarBackup2.peer.RemotePeer-class.html#_setCbackCommand CedarBackup2.peer.RemotePeer._getCbackCommand CedarBackup2.peer.RemotePeer-class.html#_getCbackCommand CedarBackup2.peer.RemotePeer.remoteUser CedarBackup2.peer.RemotePeer-class.html#remoteUser CedarBackup2.peer.RemotePeer.workingDir CedarBackup2.peer.RemotePeer-class.html#workingDir CedarBackup2.peer.RemotePeer.checkCollectIndicator CedarBackup2.peer.RemotePeer-class.html#checkCollectIndicator CedarBackup2.peer.RemotePeer._getDirContents CedarBackup2.peer.RemotePeer-class.html#_getDirContents CedarBackup2.peer.RemotePeer._copyRemoteDir CedarBackup2.peer.RemotePeer-class.html#_copyRemoteDir CedarBackup2.peer.RemotePeer.executeManagedAction CedarBackup2.peer.RemotePeer-class.html#executeManagedAction CedarBackup2.peer.RemotePeer._getIgnoreFailureMode CedarBackup2.peer.RemotePeer-class.html#_getIgnoreFailureMode CedarBackup2.peer.RemotePeer.ignoreFailureMode CedarBackup2.peer.RemotePeer-class.html#ignoreFailureMode CedarBackup2.peer.RemotePeer._setWorkingDir CedarBackup2.peer.RemotePeer-class.html#_setWorkingDir CedarBackup2.peer.RemotePeer.rcpCommand CedarBackup2.peer.RemotePeer-class.html#rcpCommand 
CedarBackup2.peer.RemotePeer.rshCommand CedarBackup2.peer.RemotePeer-class.html#rshCommand CedarBackup2.peer.RemotePeer.name CedarBackup2.peer.RemotePeer-class.html#name CedarBackup2.peer.RemotePeer._getCollectDir CedarBackup2.peer.RemotePeer-class.html#_getCollectDir CedarBackup2.peer.RemotePeer._setRemoteUser CedarBackup2.peer.RemotePeer-class.html#_setRemoteUser CedarBackup2.peer.RemotePeer._setRcpCommand CedarBackup2.peer.RemotePeer-class.html#_setRcpCommand CedarBackup2.peer.RemotePeer._executeRemoteCommand CedarBackup2.peer.RemotePeer-class.html#_executeRemoteCommand CedarBackup2.peer.RemotePeer.collectDir CedarBackup2.peer.RemotePeer-class.html#collectDir CedarBackup2.peer.RemotePeer._setCollectDir CedarBackup2.peer.RemotePeer-class.html#_setCollectDir CedarBackup2.peer.RemotePeer._getRemoteUser CedarBackup2.peer.RemotePeer-class.html#_getRemoteUser CedarBackup2.peer.RemotePeer.stagePeer CedarBackup2.peer.RemotePeer-class.html#stagePeer CedarBackup2.peer.RemotePeer._pushLocalFile CedarBackup2.peer.RemotePeer-class.html#_pushLocalFile CedarBackup2.peer.RemotePeer._setName CedarBackup2.peer.RemotePeer-class.html#_setName CedarBackup2.peer.RemotePeer._getRshCommand CedarBackup2.peer.RemotePeer-class.html#_getRshCommand CedarBackup2.peer.RemotePeer._setRshCommand CedarBackup2.peer.RemotePeer-class.html#_setRshCommand CedarBackup2.tools.span.SpanOptions CedarBackup2.tools.span.SpanOptions-class.html CedarBackup2.cli.Options._getMode CedarBackup2.cli.Options-class.html#_getMode CedarBackup2.cli.Options.stacktrace CedarBackup2.cli.Options-class.html#stacktrace CedarBackup2.cli.Options.managed CedarBackup2.cli.Options-class.html#managed CedarBackup2.cli.Options.help CedarBackup2.cli.Options-class.html#help CedarBackup2.cli.Options._getFull CedarBackup2.cli.Options-class.html#_getFull CedarBackup2.cli.Options.__str__ CedarBackup2.cli.Options-class.html#__str__ CedarBackup2.cli.Options._setStacktrace CedarBackup2.cli.Options-class.html#_setStacktrace 
CedarBackup2.cli.Options.actions CedarBackup2.cli.Options-class.html#actions CedarBackup2.cli.Options.owner CedarBackup2.cli.Options-class.html#owner CedarBackup2.cli.Options._setQuiet CedarBackup2.cli.Options-class.html#_setQuiet CedarBackup2.cli.Options._setVersion CedarBackup2.cli.Options-class.html#_setVersion CedarBackup2.cli.Options._getVerbose CedarBackup2.cli.Options-class.html#_getVerbose CedarBackup2.cli.Options.verbose CedarBackup2.cli.Options-class.html#verbose CedarBackup2.cli.Options._setHelp CedarBackup2.cli.Options-class.html#_setHelp CedarBackup2.cli.Options._getDiagnostics CedarBackup2.cli.Options-class.html#_getDiagnostics CedarBackup2.cli.Options._getDebug CedarBackup2.cli.Options-class.html#_getDebug CedarBackup2.cli.Options._parseArgumentList CedarBackup2.cli.Options-class.html#_parseArgumentList CedarBackup2.cli.Options.buildArgumentList CedarBackup2.cli.Options-class.html#buildArgumentList CedarBackup2.cli.Options._getManagedOnly CedarBackup2.cli.Options-class.html#_getManagedOnly CedarBackup2.cli.Options.__cmp__ CedarBackup2.cli.Options-class.html#__cmp__ CedarBackup2.cli.Options._setOutput CedarBackup2.cli.Options-class.html#_setOutput CedarBackup2.cli.Options._setOwner CedarBackup2.cli.Options-class.html#_setOwner CedarBackup2.cli.Options._setMode CedarBackup2.cli.Options-class.html#_setMode CedarBackup2.cli.Options.__init__ CedarBackup2.cli.Options-class.html#__init__ CedarBackup2.cli.Options._getQuiet CedarBackup2.cli.Options-class.html#_getQuiet CedarBackup2.cli.Options.managedOnly CedarBackup2.cli.Options-class.html#managedOnly CedarBackup2.cli.Options._getManaged CedarBackup2.cli.Options-class.html#_getManaged CedarBackup2.cli.Options.config CedarBackup2.cli.Options-class.html#config CedarBackup2.cli.Options.__repr__ CedarBackup2.cli.Options-class.html#__repr__ CedarBackup2.cli.Options._getVersion CedarBackup2.cli.Options-class.html#_getVersion CedarBackup2.cli.Options._getLogfile CedarBackup2.cli.Options-class.html#_getLogfile 
CedarBackup2.cli.Options.full CedarBackup2.cli.Options-class.html#full CedarBackup2.cli.Options._getConfig CedarBackup2.cli.Options-class.html#_getConfig CedarBackup2.cli.Options._getStacktrace CedarBackup2.cli.Options-class.html#_getStacktrace CedarBackup2.cli.Options._setFull CedarBackup2.cli.Options-class.html#_setFull CedarBackup2.cli.Options.version CedarBackup2.cli.Options-class.html#version CedarBackup2.cli.Options._setManagedOnly CedarBackup2.cli.Options-class.html#_setManagedOnly CedarBackup2.cli.Options._setDiagnostics CedarBackup2.cli.Options-class.html#_setDiagnostics CedarBackup2.cli.Options._setConfig CedarBackup2.cli.Options-class.html#_setConfig CedarBackup2.tools.span.SpanOptions.validate CedarBackup2.tools.span.SpanOptions-class.html#validate CedarBackup2.cli.Options.logfile CedarBackup2.cli.Options-class.html#logfile CedarBackup2.cli.Options.buildArgumentString CedarBackup2.cli.Options-class.html#buildArgumentString CedarBackup2.cli.Options._setDebug CedarBackup2.cli.Options-class.html#_setDebug CedarBackup2.cli.Options._setManaged CedarBackup2.cli.Options-class.html#_setManaged CedarBackup2.cli.Options._setActions CedarBackup2.cli.Options-class.html#_setActions CedarBackup2.cli.Options._getHelp CedarBackup2.cli.Options-class.html#_getHelp CedarBackup2.cli.Options._getOwner CedarBackup2.cli.Options-class.html#_getOwner CedarBackup2.cli.Options._setLogfile CedarBackup2.cli.Options-class.html#_setLogfile CedarBackup2.cli.Options.quiet CedarBackup2.cli.Options-class.html#quiet CedarBackup2.cli.Options.mode CedarBackup2.cli.Options-class.html#mode CedarBackup2.cli.Options.diagnostics CedarBackup2.cli.Options-class.html#diagnostics CedarBackup2.cli.Options.debug CedarBackup2.cli.Options-class.html#debug CedarBackup2.cli.Options.output CedarBackup2.cli.Options-class.html#output CedarBackup2.cli.Options._setVerbose CedarBackup2.cli.Options-class.html#_setVerbose CedarBackup2.cli.Options._getOutput CedarBackup2.cli.Options-class.html#_getOutput 
CedarBackup2.cli.Options._getActions CedarBackup2.cli.Options-class.html#_getActions CedarBackup2.util.AbsolutePathList CedarBackup2.util.AbsolutePathList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.AbsolutePathList.append CedarBackup2.util.AbsolutePathList-class.html#append CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ CedarBackup2.util.AbsolutePathList.extend CedarBackup2.util.AbsolutePathList-class.html#extend CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ CedarBackup2.util.UnorderedList-class.html#__eq__ CedarBackup2.util.AbsolutePathList.insert CedarBackup2.util.AbsolutePathList-class.html#insert CedarBackup2.util.UnorderedList.__le__ CedarBackup2.util.UnorderedList-class.html#__le__ CedarBackup2.util.UnorderedList.__ge__ CedarBackup2.util.UnorderedList-class.html#__ge__ CedarBackup2.util.Diagnostics CedarBackup2.util.Diagnostics-class.html CedarBackup2.util.Diagnostics._getEncoding CedarBackup2.util.Diagnostics-class.html#_getEncoding CedarBackup2.util.Diagnostics.encoding CedarBackup2.util.Diagnostics-class.html#encoding CedarBackup2.util.Diagnostics.locale CedarBackup2.util.Diagnostics-class.html#locale CedarBackup2.util.Diagnostics.__str__ CedarBackup2.util.Diagnostics-class.html#__str__ CedarBackup2.util.Diagnostics.getValues CedarBackup2.util.Diagnostics-class.html#getValues CedarBackup2.util.Diagnostics.interpreter CedarBackup2.util.Diagnostics-class.html#interpreter CedarBackup2.util.Diagnostics.__init__ CedarBackup2.util.Diagnostics-class.html#__init__ CedarBackup2.util.Diagnostics.platform CedarBackup2.util.Diagnostics-class.html#platform CedarBackup2.util.Diagnostics.version CedarBackup2.util.Diagnostics-class.html#version CedarBackup2.util.Diagnostics.printDiagnostics CedarBackup2.util.Diagnostics-class.html#printDiagnostics 
CedarBackup2.util.Diagnostics._getVersion CedarBackup2.util.Diagnostics-class.html#_getVersion CedarBackup2.util.Diagnostics._getTimestamp CedarBackup2.util.Diagnostics-class.html#_getTimestamp CedarBackup2.util.Diagnostics.timestamp CedarBackup2.util.Diagnostics-class.html#timestamp CedarBackup2.util.Diagnostics._getPlatform CedarBackup2.util.Diagnostics-class.html#_getPlatform CedarBackup2.util.Diagnostics.logDiagnostics CedarBackup2.util.Diagnostics-class.html#logDiagnostics CedarBackup2.util.Diagnostics._buildDiagnosticLines CedarBackup2.util.Diagnostics-class.html#_buildDiagnosticLines CedarBackup2.util.Diagnostics._getInterpreter CedarBackup2.util.Diagnostics-class.html#_getInterpreter CedarBackup2.util.Diagnostics._getMaxLength CedarBackup2.util.Diagnostics-class.html#_getMaxLength CedarBackup2.util.Diagnostics._getLocale CedarBackup2.util.Diagnostics-class.html#_getLocale CedarBackup2.util.Diagnostics.__repr__ CedarBackup2.util.Diagnostics-class.html#__repr__ CedarBackup2.util.DirectedGraph CedarBackup2.util.DirectedGraph-class.html CedarBackup2.util.DirectedGraph._DISCOVERED CedarBackup2.util.DirectedGraph-class.html#_DISCOVERED CedarBackup2.util.DirectedGraph.__str__ CedarBackup2.util.DirectedGraph-class.html#__str__ CedarBackup2.util.DirectedGraph.topologicalSort CedarBackup2.util.DirectedGraph-class.html#topologicalSort CedarBackup2.util.DirectedGraph._EXPLORED CedarBackup2.util.DirectedGraph-class.html#_EXPLORED CedarBackup2.util.DirectedGraph._getName CedarBackup2.util.DirectedGraph-class.html#_getName CedarBackup2.util.DirectedGraph.__init__ CedarBackup2.util.DirectedGraph-class.html#__init__ CedarBackup2.util.DirectedGraph.__cmp__ CedarBackup2.util.DirectedGraph-class.html#__cmp__ CedarBackup2.util.DirectedGraph._UNDISCOVERED CedarBackup2.util.DirectedGraph-class.html#_UNDISCOVERED CedarBackup2.util.DirectedGraph.createVertex CedarBackup2.util.DirectedGraph-class.html#createVertex CedarBackup2.util.DirectedGraph._topologicalSort 
CedarBackup2.util.DirectedGraph-class.html#_topologicalSort CedarBackup2.util.DirectedGraph.createEdge CedarBackup2.util.DirectedGraph-class.html#createEdge CedarBackup2.util.DirectedGraph.name CedarBackup2.util.DirectedGraph-class.html#name CedarBackup2.util.DirectedGraph.__repr__ CedarBackup2.util.DirectedGraph-class.html#__repr__ CedarBackup2.util.ObjectTypeList CedarBackup2.util.ObjectTypeList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.ObjectTypeList.append CedarBackup2.util.ObjectTypeList-class.html#append CedarBackup2.util.ObjectTypeList.__init__ CedarBackup2.util.ObjectTypeList-class.html#__init__ CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ CedarBackup2.util.ObjectTypeList.extend CedarBackup2.util.ObjectTypeList-class.html#extend CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ CedarBackup2.util.UnorderedList-class.html#__eq__ CedarBackup2.util.ObjectTypeList.insert CedarBackup2.util.ObjectTypeList-class.html#insert CedarBackup2.util.UnorderedList.__le__ CedarBackup2.util.UnorderedList-class.html#__le__ CedarBackup2.util.UnorderedList.__ge__ CedarBackup2.util.UnorderedList-class.html#__ge__ CedarBackup2.util.PathResolverSingleton CedarBackup2.util.PathResolverSingleton-class.html CedarBackup2.util.PathResolverSingleton._Helper CedarBackup2.util.PathResolverSingleton._Helper-class.html CedarBackup2.util.PathResolverSingleton.getInstance CedarBackup2.util.PathResolverSingleton-class.html#getInstance CedarBackup2.util.PathResolverSingleton._instance CedarBackup2.util.PathResolverSingleton-class.html#_instance CedarBackup2.util.PathResolverSingleton.lookup CedarBackup2.util.PathResolverSingleton-class.html#lookup CedarBackup2.util.PathResolverSingleton._mapping CedarBackup2.util.PathResolverSingleton-class.html#_mapping 
CedarBackup2.util.PathResolverSingleton.__init__ CedarBackup2.util.PathResolverSingleton-class.html#__init__ CedarBackup2.util.PathResolverSingleton.fill CedarBackup2.util.PathResolverSingleton-class.html#fill CedarBackup2.util.PathResolverSingleton._Helper CedarBackup2.util.PathResolverSingleton._Helper-class.html CedarBackup2.util.PathResolverSingleton._Helper.__call__ CedarBackup2.util.PathResolverSingleton._Helper-class.html#__call__ CedarBackup2.util.PathResolverSingleton._Helper.__init__ CedarBackup2.util.PathResolverSingleton._Helper-class.html#__init__ CedarBackup2.util.Pipe CedarBackup2.util.Pipe-class.html CedarBackup2.util.Pipe.__init__ CedarBackup2.util.Pipe-class.html#__init__ CedarBackup2.util.RegexList CedarBackup2.util.RegexList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.RegexList.append CedarBackup2.util.RegexList-class.html#append CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ CedarBackup2.util.RegexList.extend CedarBackup2.util.RegexList-class.html#extend CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ CedarBackup2.util.UnorderedList-class.html#__eq__ CedarBackup2.util.RegexList.insert CedarBackup2.util.RegexList-class.html#insert CedarBackup2.util.UnorderedList.__le__ CedarBackup2.util.UnorderedList-class.html#__le__ CedarBackup2.util.UnorderedList.__ge__ CedarBackup2.util.UnorderedList-class.html#__ge__ CedarBackup2.util.RegexMatchList CedarBackup2.util.RegexMatchList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.RegexMatchList.append CedarBackup2.util.RegexMatchList-class.html#append CedarBackup2.util.RegexMatchList.__init__ CedarBackup2.util.RegexMatchList-class.html#__init__ CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ 
CedarBackup2.util.RegexMatchList.extend CedarBackup2.util.RegexMatchList-class.html#extend CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ CedarBackup2.util.UnorderedList-class.html#__eq__ CedarBackup2.util.RegexMatchList.insert CedarBackup2.util.RegexMatchList-class.html#insert CedarBackup2.util.UnorderedList.__le__ CedarBackup2.util.UnorderedList-class.html#__le__ CedarBackup2.util.UnorderedList.__ge__ CedarBackup2.util.UnorderedList-class.html#__ge__ CedarBackup2.util.RestrictedContentList CedarBackup2.util.RestrictedContentList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.RestrictedContentList.append CedarBackup2.util.RestrictedContentList-class.html#append CedarBackup2.util.RestrictedContentList.__init__ CedarBackup2.util.RestrictedContentList-class.html#__init__ CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ CedarBackup2.util.RestrictedContentList.extend CedarBackup2.util.RestrictedContentList-class.html#extend CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ CedarBackup2.util.UnorderedList-class.html#__eq__ CedarBackup2.util.RestrictedContentList.insert CedarBackup2.util.RestrictedContentList-class.html#insert CedarBackup2.util.UnorderedList.__le__ CedarBackup2.util.UnorderedList-class.html#__le__ CedarBackup2.util.UnorderedList.__ge__ CedarBackup2.util.UnorderedList-class.html#__ge__ CedarBackup2.util.UnorderedList CedarBackup2.util.UnorderedList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ 
CedarBackup2.util.UnorderedList-class.html#__eq__ CedarBackup2.util.UnorderedList.__le__ CedarBackup2.util.UnorderedList-class.html#__le__ CedarBackup2.util.UnorderedList.__ge__ CedarBackup2.util.UnorderedList-class.html#__ge__ CedarBackup2.util._Vertex CedarBackup2.util._Vertex-class.html CedarBackup2.util._Vertex.__init__ CedarBackup2.util._Vertex-class.html#__init__ CedarBackup2.writers.cdwriter.CdWriter CedarBackup2.writers.cdwriter.CdWriter-class.html CedarBackup2.writers.cdwriter.CdWriter._createImage CedarBackup2.writers.cdwriter.CdWriter-class.html#_createImage CedarBackup2.writers.cdwriter.CdWriter._calculateCapacity CedarBackup2.writers.cdwriter.CdWriter-class.html#_calculateCapacity CedarBackup2.writers.cdwriter.CdWriter._buildPropertiesArgs CedarBackup2.writers.cdwriter.CdWriter-class.html#_buildPropertiesArgs CedarBackup2.writers.cdwriter.CdWriter.writeImage CedarBackup2.writers.cdwriter.CdWriter-class.html#writeImage CedarBackup2.writers.cdwriter.CdWriter.deviceHasTray CedarBackup2.writers.cdwriter.CdWriter-class.html#deviceHasTray CedarBackup2.writers.cdwriter.CdWriter.openTray CedarBackup2.writers.cdwriter.CdWriter-class.html#openTray CedarBackup2.writers.cdwriter.CdWriter.addImageEntry CedarBackup2.writers.cdwriter.CdWriter-class.html#addImageEntry CedarBackup2.writers.cdwriter.CdWriter._buildWriteArgs CedarBackup2.writers.cdwriter.CdWriter-class.html#_buildWriteArgs CedarBackup2.writers.cdwriter.CdWriter.unlockTray CedarBackup2.writers.cdwriter.CdWriter-class.html#unlockTray CedarBackup2.writers.cdwriter.CdWriter._parseBoundariesOutput CedarBackup2.writers.cdwriter.CdWriter-class.html#_parseBoundariesOutput CedarBackup2.writers.cdwriter.CdWriter._getHardwareId CedarBackup2.writers.cdwriter.CdWriter-class.html#_getHardwareId CedarBackup2.writers.cdwriter.CdWriter.refreshMedia CedarBackup2.writers.cdwriter.CdWriter-class.html#refreshMedia CedarBackup2.writers.cdwriter.CdWriter.closeTray CedarBackup2.writers.cdwriter.CdWriter-class.html#closeTray 
CedarBackup2.writers.cdwriter.CdWriter.initializeImage CedarBackup2.writers.cdwriter.CdWriter-class.html#initializeImage CedarBackup2.writers.cdwriter.CdWriter.deviceCanEject CedarBackup2.writers.cdwriter.CdWriter-class.html#deviceCanEject CedarBackup2.writers.cdwriter.CdWriter.__init__ CedarBackup2.writers.cdwriter.CdWriter-class.html#__init__ CedarBackup2.writers.cdwriter.CdWriter.refreshMediaDelay CedarBackup2.writers.cdwriter.CdWriter-class.html#refreshMediaDelay CedarBackup2.writers.cdwriter.CdWriter._buildCloseTrayArgs CedarBackup2.writers.cdwriter.CdWriter-class.html#_buildCloseTrayArgs CedarBackup2.writers.cdwriter.CdWriter._getDeviceHasTray CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDeviceHasTray CedarBackup2.writers.cdwriter.CdWriter.getEstimatedImageSize CedarBackup2.writers.cdwriter.CdWriter-class.html#getEstimatedImageSize CedarBackup2.writers.cdwriter.CdWriter.media CedarBackup2.writers.cdwriter.CdWriter-class.html#media CedarBackup2.writers.cdwriter.CdWriter._retrieveProperties CedarBackup2.writers.cdwriter.CdWriter-class.html#_retrieveProperties CedarBackup2.writers.cdwriter.CdWriter.deviceVendor CedarBackup2.writers.cdwriter.CdWriter-class.html#deviceVendor CedarBackup2.writers.cdwriter.CdWriter.hardwareId CedarBackup2.writers.cdwriter.CdWriter-class.html#hardwareId CedarBackup2.writers.cdwriter.CdWriter._getDeviceCanEject CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDeviceCanEject CedarBackup2.writers.cdwriter.CdWriter._getMedia CedarBackup2.writers.cdwriter.CdWriter-class.html#_getMedia CedarBackup2.writers.cdwriter.CdWriter.isRewritable CedarBackup2.writers.cdwriter.CdWriter-class.html#isRewritable CedarBackup2.writers.cdwriter.CdWriter.deviceType CedarBackup2.writers.cdwriter.CdWriter-class.html#deviceType CedarBackup2.writers.cdwriter.CdWriter.setImageNewDisc CedarBackup2.writers.cdwriter.CdWriter-class.html#setImageNewDisc CedarBackup2.writers.cdwriter.CdWriter.driveSpeed 
CedarBackup2.writers.cdwriter.CdWriter-class.html#driveSpeed CedarBackup2.writers.cdwriter.CdWriter._getDevice CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDevice CedarBackup2.writers.cdwriter.CdWriter.deviceBufferSize CedarBackup2.writers.cdwriter.CdWriter-class.html#deviceBufferSize CedarBackup2.writers.cdwriter.CdWriter._getDeviceType CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDeviceType CedarBackup2.writers.cdwriter.CdWriter._getDeviceSupportsMulti CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDeviceSupportsMulti CedarBackup2.writers.cdwriter.CdWriter._getScsiId CedarBackup2.writers.cdwriter.CdWriter-class.html#_getScsiId CedarBackup2.writers.cdwriter.CdWriter._buildBlankArgs CedarBackup2.writers.cdwriter.CdWriter-class.html#_buildBlankArgs CedarBackup2.writers.cdwriter.CdWriter._getDriveSpeed CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDriveSpeed CedarBackup2.writers.cdwriter.CdWriter._getDeviceVendor CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDeviceVendor CedarBackup2.writers.cdwriter.CdWriter._writeImage CedarBackup2.writers.cdwriter.CdWriter-class.html#_writeImage CedarBackup2.writers.cdwriter.CdWriter.deviceId CedarBackup2.writers.cdwriter.CdWriter-class.html#deviceId CedarBackup2.writers.cdwriter.CdWriter._blankMedia CedarBackup2.writers.cdwriter.CdWriter-class.html#_blankMedia CedarBackup2.writers.cdwriter.CdWriter._buildOpenTrayArgs CedarBackup2.writers.cdwriter.CdWriter-class.html#_buildOpenTrayArgs CedarBackup2.writers.cdwriter.CdWriter._getDeviceBufferSize CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDeviceBufferSize CedarBackup2.writers.cdwriter.CdWriter.deviceSupportsMulti CedarBackup2.writers.cdwriter.CdWriter-class.html#deviceSupportsMulti CedarBackup2.writers.cdwriter.CdWriter._getEjectDelay CedarBackup2.writers.cdwriter.CdWriter-class.html#_getEjectDelay CedarBackup2.writers.cdwriter.CdWriter._getRefreshMediaDelay CedarBackup2.writers.cdwriter.CdWriter-class.html#_getRefreshMediaDelay 
CedarBackup2.writers.cdwriter.CdWriter.scsiId CedarBackup2.writers.cdwriter.CdWriter-class.html#scsiId CedarBackup2.writers.cdwriter.CdWriter._buildUnlockTrayArgs CedarBackup2.writers.cdwriter.CdWriter-class.html#_buildUnlockTrayArgs CedarBackup2.writers.cdwriter.CdWriter.device CedarBackup2.writers.cdwriter.CdWriter-class.html#device CedarBackup2.writers.cdwriter.CdWriter._getDeviceId CedarBackup2.writers.cdwriter.CdWriter-class.html#_getDeviceId CedarBackup2.writers.cdwriter.CdWriter.retrieveCapacity CedarBackup2.writers.cdwriter.CdWriter-class.html#retrieveCapacity CedarBackup2.writers.cdwriter.CdWriter._getBoundaries CedarBackup2.writers.cdwriter.CdWriter-class.html#_getBoundaries CedarBackup2.writers.cdwriter.CdWriter._buildBoundariesArgs CedarBackup2.writers.cdwriter.CdWriter-class.html#_buildBoundariesArgs CedarBackup2.writers.cdwriter.CdWriter._parsePropertiesOutput CedarBackup2.writers.cdwriter.CdWriter-class.html#_parsePropertiesOutput CedarBackup2.writers.cdwriter.CdWriter.ejectDelay CedarBackup2.writers.cdwriter.CdWriter-class.html#ejectDelay CedarBackup2.writers.cdwriter.MediaCapacity CedarBackup2.writers.cdwriter.MediaCapacity-class.html CedarBackup2.writers.cdwriter.MediaCapacity._getBytesUsed CedarBackup2.writers.cdwriter.MediaCapacity-class.html#_getBytesUsed CedarBackup2.writers.cdwriter.MediaCapacity.bytesUsed CedarBackup2.writers.cdwriter.MediaCapacity-class.html#bytesUsed CedarBackup2.writers.cdwriter.MediaCapacity.bytesAvailable CedarBackup2.writers.cdwriter.MediaCapacity-class.html#bytesAvailable CedarBackup2.writers.cdwriter.MediaCapacity.__str__ CedarBackup2.writers.cdwriter.MediaCapacity-class.html#__str__ CedarBackup2.writers.cdwriter.MediaCapacity.utilized CedarBackup2.writers.cdwriter.MediaCapacity-class.html#utilized CedarBackup2.writers.cdwriter.MediaCapacity.__init__ CedarBackup2.writers.cdwriter.MediaCapacity-class.html#__init__ CedarBackup2.writers.cdwriter.MediaCapacity._getTotalCapacity 
CedarBackup2.writers.cdwriter.MediaCapacity-class.html#_getTotalCapacity CedarBackup2.writers.cdwriter.MediaCapacity.boundaries CedarBackup2.writers.cdwriter.MediaCapacity-class.html#boundaries CedarBackup2.writers.cdwriter.MediaCapacity._getUtilized CedarBackup2.writers.cdwriter.MediaCapacity-class.html#_getUtilized CedarBackup2.writers.cdwriter.MediaCapacity._getBytesAvailable CedarBackup2.writers.cdwriter.MediaCapacity-class.html#_getBytesAvailable CedarBackup2.writers.cdwriter.MediaCapacity.totalCapacity CedarBackup2.writers.cdwriter.MediaCapacity-class.html#totalCapacity CedarBackup2.writers.cdwriter.MediaCapacity._getBoundaries CedarBackup2.writers.cdwriter.MediaCapacity-class.html#_getBoundaries CedarBackup2.writers.cdwriter.MediaDefinition CedarBackup2.writers.cdwriter.MediaDefinition-class.html CedarBackup2.writers.cdwriter.MediaDefinition.initialLeadIn CedarBackup2.writers.cdwriter.MediaDefinition-class.html#initialLeadIn CedarBackup2.writers.cdwriter.MediaDefinition.rewritable CedarBackup2.writers.cdwriter.MediaDefinition-class.html#rewritable CedarBackup2.writers.cdwriter.MediaDefinition.__init__ CedarBackup2.writers.cdwriter.MediaDefinition-class.html#__init__ CedarBackup2.writers.cdwriter.MediaDefinition.capacity CedarBackup2.writers.cdwriter.MediaDefinition-class.html#capacity CedarBackup2.writers.cdwriter.MediaDefinition.leadIn CedarBackup2.writers.cdwriter.MediaDefinition-class.html#leadIn CedarBackup2.writers.cdwriter.MediaDefinition.mediaType CedarBackup2.writers.cdwriter.MediaDefinition-class.html#mediaType CedarBackup2.writers.cdwriter.MediaDefinition._setValues CedarBackup2.writers.cdwriter.MediaDefinition-class.html#_setValues CedarBackup2.writers.cdwriter.MediaDefinition._getMediaType CedarBackup2.writers.cdwriter.MediaDefinition-class.html#_getMediaType CedarBackup2.writers.cdwriter.MediaDefinition._getInitialLeadIn CedarBackup2.writers.cdwriter.MediaDefinition-class.html#_getInitialLeadIn 
CedarBackup2.writers.cdwriter.MediaDefinition._getLeadIn CedarBackup2.writers.cdwriter.MediaDefinition-class.html#_getLeadIn CedarBackup2.writers.cdwriter.MediaDefinition._getCapacity CedarBackup2.writers.cdwriter.MediaDefinition-class.html#_getCapacity CedarBackup2.writers.cdwriter.MediaDefinition._getRewritable CedarBackup2.writers.cdwriter.MediaDefinition-class.html#_getRewritable CedarBackup2.writers.cdwriter._ImageProperties CedarBackup2.writers.cdwriter._ImageProperties-class.html CedarBackup2.writers.cdwriter._ImageProperties.__init__ CedarBackup2.writers.cdwriter._ImageProperties-class.html#__init__ CedarBackup2.writers.dvdwriter.DvdWriter CedarBackup2.writers.dvdwriter.DvdWriter-class.html CedarBackup2.writers.dvdwriter.DvdWriter._buildWriteArgs CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_buildWriteArgs CedarBackup2.writers.dvdwriter.DvdWriter.refreshMedia CedarBackup2.writers.dvdwriter.DvdWriter-class.html#refreshMedia CedarBackup2.writers.dvdwriter.DvdWriter.writeImage CedarBackup2.writers.dvdwriter.DvdWriter-class.html#writeImage CedarBackup2.writers.dvdwriter.DvdWriter.deviceHasTray CedarBackup2.writers.dvdwriter.DvdWriter-class.html#deviceHasTray CedarBackup2.writers.dvdwriter.DvdWriter.openTray CedarBackup2.writers.dvdwriter.DvdWriter-class.html#openTray CedarBackup2.writers.dvdwriter.DvdWriter.addImageEntry CedarBackup2.writers.dvdwriter.DvdWriter-class.html#addImageEntry CedarBackup2.writers.dvdwriter.DvdWriter.unlockTray CedarBackup2.writers.dvdwriter.DvdWriter-class.html#unlockTray CedarBackup2.writers.dvdwriter.DvdWriter._getHardwareId CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getHardwareId CedarBackup2.writers.dvdwriter.DvdWriter.closeTray CedarBackup2.writers.dvdwriter.DvdWriter-class.html#closeTray CedarBackup2.writers.dvdwriter.DvdWriter.initializeImage CedarBackup2.writers.dvdwriter.DvdWriter-class.html#initializeImage CedarBackup2.writers.dvdwriter.DvdWriter.deviceCanEject 
CedarBackup2.writers.dvdwriter.DvdWriter-class.html#deviceCanEject CedarBackup2.writers.dvdwriter.DvdWriter.__init__ CedarBackup2.writers.dvdwriter.DvdWriter-class.html#__init__ CedarBackup2.writers.dvdwriter.DvdWriter.refreshMediaDelay CedarBackup2.writers.dvdwriter.DvdWriter-class.html#refreshMediaDelay CedarBackup2.writers.dvdwriter.DvdWriter._getDeviceHasTray CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getDeviceHasTray CedarBackup2.writers.dvdwriter.DvdWriter.getEstimatedImageSize CedarBackup2.writers.dvdwriter.DvdWriter-class.html#getEstimatedImageSize CedarBackup2.writers.dvdwriter.DvdWriter.media CedarBackup2.writers.dvdwriter.DvdWriter-class.html#media CedarBackup2.writers.dvdwriter.DvdWriter._parseSectorsUsed CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_parseSectorsUsed CedarBackup2.writers.dvdwriter.DvdWriter.hardwareId CedarBackup2.writers.dvdwriter.DvdWriter-class.html#hardwareId CedarBackup2.writers.dvdwriter.DvdWriter._getDeviceCanEject CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getDeviceCanEject CedarBackup2.writers.dvdwriter.DvdWriter._getMedia CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getMedia CedarBackup2.writers.dvdwriter.DvdWriter.isRewritable CedarBackup2.writers.dvdwriter.DvdWriter-class.html#isRewritable CedarBackup2.writers.dvdwriter.DvdWriter.setImageNewDisc CedarBackup2.writers.dvdwriter.DvdWriter-class.html#setImageNewDisc CedarBackup2.writers.dvdwriter.DvdWriter.driveSpeed CedarBackup2.writers.dvdwriter.DvdWriter-class.html#driveSpeed CedarBackup2.writers.dvdwriter.DvdWriter._getDevice CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getDevice CedarBackup2.writers.dvdwriter.DvdWriter._getScsiId CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getScsiId CedarBackup2.writers.dvdwriter.DvdWriter._getDriveSpeed CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getDriveSpeed CedarBackup2.writers.dvdwriter.DvdWriter._writeImage CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_writeImage 
CedarBackup2.writers.dvdwriter.DvdWriter.ejectDelay CedarBackup2.writers.dvdwriter.DvdWriter-class.html#ejectDelay CedarBackup2.writers.dvdwriter.DvdWriter._searchForOverburn CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_searchForOverburn CedarBackup2.writers.dvdwriter.DvdWriter.device CedarBackup2.writers.dvdwriter.DvdWriter-class.html#device CedarBackup2.writers.dvdwriter.DvdWriter._getEjectDelay CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getEjectDelay CedarBackup2.writers.dvdwriter.DvdWriter._getRefreshMediaDelay CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getRefreshMediaDelay CedarBackup2.writers.dvdwriter.DvdWriter.scsiId CedarBackup2.writers.dvdwriter.DvdWriter-class.html#scsiId CedarBackup2.writers.dvdwriter.DvdWriter.retrieveCapacity CedarBackup2.writers.dvdwriter.DvdWriter-class.html#retrieveCapacity CedarBackup2.writers.dvdwriter.DvdWriter._retrieveSectorsUsed CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_retrieveSectorsUsed CedarBackup2.writers.dvdwriter.DvdWriter._getEstimatedImageSize CedarBackup2.writers.dvdwriter.DvdWriter-class.html#_getEstimatedImageSize CedarBackup2.writers.dvdwriter.MediaCapacity CedarBackup2.writers.dvdwriter.MediaCapacity-class.html CedarBackup2.writers.dvdwriter.MediaCapacity._getBytesUsed CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#_getBytesUsed CedarBackup2.writers.dvdwriter.MediaCapacity.bytesUsed CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#bytesUsed CedarBackup2.writers.dvdwriter.MediaCapacity.bytesAvailable CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#bytesAvailable CedarBackup2.writers.dvdwriter.MediaCapacity.__str__ CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#__str__ CedarBackup2.writers.dvdwriter.MediaCapacity.utilized CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#utilized CedarBackup2.writers.dvdwriter.MediaCapacity.__init__ CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#__init__ 
CedarBackup2.writers.dvdwriter.MediaCapacity._getTotalCapacity CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#_getTotalCapacity CedarBackup2.writers.dvdwriter.MediaCapacity._getUtilized CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#_getUtilized CedarBackup2.writers.dvdwriter.MediaCapacity._getBytesAvailable CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#_getBytesAvailable CedarBackup2.writers.dvdwriter.MediaCapacity.totalCapacity CedarBackup2.writers.dvdwriter.MediaCapacity-class.html#totalCapacity CedarBackup2.writers.dvdwriter.MediaDefinition CedarBackup2.writers.dvdwriter.MediaDefinition-class.html CedarBackup2.writers.dvdwriter.MediaDefinition.capacity CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#capacity CedarBackup2.writers.dvdwriter.MediaDefinition.mediaType CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#mediaType CedarBackup2.writers.dvdwriter.MediaDefinition._setValues CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#_setValues CedarBackup2.writers.dvdwriter.MediaDefinition._getMediaType CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#_getMediaType CedarBackup2.writers.dvdwriter.MediaDefinition._getRewritable CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#_getRewritable CedarBackup2.writers.dvdwriter.MediaDefinition.rewritable CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#rewritable CedarBackup2.writers.dvdwriter.MediaDefinition.__init__ CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#__init__ CedarBackup2.writers.dvdwriter.MediaDefinition._getCapacity CedarBackup2.writers.dvdwriter.MediaDefinition-class.html#_getCapacity CedarBackup2.writers.dvdwriter._ImageProperties CedarBackup2.writers.dvdwriter._ImageProperties-class.html CedarBackup2.writers.dvdwriter._ImageProperties.__init__ CedarBackup2.writers.dvdwriter._ImageProperties-class.html#__init__ CedarBackup2.writers.util.IsoImage CedarBackup2.writers.util.IsoImage-class.html 
CedarBackup2.writers.util.IsoImage.preparerId CedarBackup2.writers.util.IsoImage-class.html#preparerId CedarBackup2.writers.util.IsoImage._buildWriteArgs CedarBackup2.writers.util.IsoImage-class.html#_buildWriteArgs CedarBackup2.writers.util.IsoImage.writeImage CedarBackup2.writers.util.IsoImage-class.html#writeImage CedarBackup2.writers.util.IsoImage._setVolumeId CedarBackup2.writers.util.IsoImage-class.html#_setVolumeId CedarBackup2.writers.util.IsoImage._setBiblioFile CedarBackup2.writers.util.IsoImage-class.html#_setBiblioFile CedarBackup2.writers.util.IsoImage._setDevice CedarBackup2.writers.util.IsoImage-class.html#_setDevice CedarBackup2.writers.util.IsoImage.getEstimatedSize CedarBackup2.writers.util.IsoImage-class.html#getEstimatedSize CedarBackup2.writers.util.IsoImage._getGraftPoint CedarBackup2.writers.util.IsoImage-class.html#_getGraftPoint CedarBackup2.writers.util.IsoImage._setUseRockRidge CedarBackup2.writers.util.IsoImage-class.html#_setUseRockRidge CedarBackup2.writers.util.IsoImage.addEntry CedarBackup2.writers.util.IsoImage-class.html#addEntry CedarBackup2.writers.util.IsoImage.graftPoint CedarBackup2.writers.util.IsoImage-class.html#graftPoint CedarBackup2.writers.util.IsoImage.applicationId CedarBackup2.writers.util.IsoImage-class.html#applicationId CedarBackup2.writers.util.IsoImage.__init__ CedarBackup2.writers.util.IsoImage-class.html#__init__ CedarBackup2.writers.util.IsoImage.biblioFile CedarBackup2.writers.util.IsoImage-class.html#biblioFile CedarBackup2.writers.util.IsoImage._buildGeneralArgs CedarBackup2.writers.util.IsoImage-class.html#_buildGeneralArgs CedarBackup2.writers.util.IsoImage._getUseRockRidge CedarBackup2.writers.util.IsoImage-class.html#_getUseRockRidge CedarBackup2.writers.util.IsoImage._getPublisherId CedarBackup2.writers.util.IsoImage-class.html#_getPublisherId CedarBackup2.writers.util.IsoImage._getEstimatedSize CedarBackup2.writers.util.IsoImage-class.html#_getEstimatedSize 
CedarBackup2.writers.util.IsoImage._setPreparerId CedarBackup2.writers.util.IsoImage-class.html#_setPreparerId CedarBackup2.writers.util.IsoImage.boundaries CedarBackup2.writers.util.IsoImage-class.html#boundaries CedarBackup2.writers.util.IsoImage._getDevice CedarBackup2.writers.util.IsoImage-class.html#_getDevice CedarBackup2.writers.util.IsoImage._getApplicationId CedarBackup2.writers.util.IsoImage-class.html#_getApplicationId CedarBackup2.writers.util.IsoImage._setBoundaries CedarBackup2.writers.util.IsoImage-class.html#_setBoundaries CedarBackup2.writers.util.IsoImage.volumeId CedarBackup2.writers.util.IsoImage-class.html#volumeId CedarBackup2.writers.util.IsoImage._buildDirEntries CedarBackup2.writers.util.IsoImage-class.html#_buildDirEntries CedarBackup2.writers.util.IsoImage._setPublisherId CedarBackup2.writers.util.IsoImage-class.html#_setPublisherId CedarBackup2.writers.util.IsoImage.device CedarBackup2.writers.util.IsoImage-class.html#device CedarBackup2.writers.util.IsoImage._setGraftPoint CedarBackup2.writers.util.IsoImage-class.html#_setGraftPoint CedarBackup2.writers.util.IsoImage._setApplicationId CedarBackup2.writers.util.IsoImage-class.html#_setApplicationId CedarBackup2.writers.util.IsoImage._buildSizeArgs CedarBackup2.writers.util.IsoImage-class.html#_buildSizeArgs CedarBackup2.writers.util.IsoImage._getVolumeId CedarBackup2.writers.util.IsoImage-class.html#_getVolumeId CedarBackup2.writers.util.IsoImage.publisherId CedarBackup2.writers.util.IsoImage-class.html#publisherId CedarBackup2.writers.util.IsoImage._getBoundaries CedarBackup2.writers.util.IsoImage-class.html#_getBoundaries CedarBackup2.writers.util.IsoImage._getPreparerId CedarBackup2.writers.util.IsoImage-class.html#_getPreparerId CedarBackup2.writers.util.IsoImage.useRockRidge CedarBackup2.writers.util.IsoImage-class.html#useRockRidge CedarBackup2.writers.util.IsoImage._getBiblioFile CedarBackup2.writers.util.IsoImage-class.html#_getBiblioFile CedarBackup2.xmlutil.Serializer 
CedarBackup2.xmlutil.Serializer-class.html CedarBackup2.xmlutil.Serializer._visitNodeList CedarBackup2.xmlutil.Serializer-class.html#_visitNodeList CedarBackup2.xmlutil.Serializer.serialize CedarBackup2.xmlutil.Serializer-class.html#serialize CedarBackup2.xmlutil.Serializer._visitEntityReference CedarBackup2.xmlutil.Serializer-class.html#_visitEntityReference CedarBackup2.xmlutil.Serializer._visitDocumentFragment CedarBackup2.xmlutil.Serializer-class.html#_visitDocumentFragment CedarBackup2.xmlutil.Serializer._visitElement CedarBackup2.xmlutil.Serializer-class.html#_visitElement CedarBackup2.xmlutil.Serializer.__init__ CedarBackup2.xmlutil.Serializer-class.html#__init__ CedarBackup2.xmlutil.Serializer._visitCDATASection CedarBackup2.xmlutil.Serializer-class.html#_visitCDATASection CedarBackup2.xmlutil.Serializer._visitDocumentType CedarBackup2.xmlutil.Serializer-class.html#_visitDocumentType CedarBackup2.xmlutil.Serializer._visitNamedNodeMap CedarBackup2.xmlutil.Serializer-class.html#_visitNamedNodeMap CedarBackup2.xmlutil.Serializer._visitAttr CedarBackup2.xmlutil.Serializer-class.html#_visitAttr CedarBackup2.xmlutil.Serializer._visitProlog CedarBackup2.xmlutil.Serializer-class.html#_visitProlog CedarBackup2.xmlutil.Serializer._tryIndent CedarBackup2.xmlutil.Serializer-class.html#_tryIndent CedarBackup2.xmlutil.Serializer._visitDocument CedarBackup2.xmlutil.Serializer-class.html#_visitDocument CedarBackup2.xmlutil.Serializer._visitNotation CedarBackup2.xmlutil.Serializer-class.html#_visitNotation CedarBackup2.xmlutil.Serializer._visitEntity CedarBackup2.xmlutil.Serializer-class.html#_visitEntity CedarBackup2.xmlutil.Serializer._write CedarBackup2.xmlutil.Serializer-class.html#_write CedarBackup2.xmlutil.Serializer._visitProcessingInstruction CedarBackup2.xmlutil.Serializer-class.html#_visitProcessingInstruction CedarBackup2.xmlutil.Serializer._visitComment CedarBackup2.xmlutil.Serializer-class.html#_visitComment CedarBackup2.xmlutil.Serializer._visit 
CedarBackup2.xmlutil.Serializer-class.html#_visit CedarBackup2.xmlutil.Serializer._visitText CedarBackup2.xmlutil.Serializer-class.html#_visitText
CedarBackup2-2.22.0/doc/interface/CedarBackup2.writers.util-module.html
CedarBackup2.writers.util

    Module util


    Provides utilities related to image writers.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes

IsoImage
    Represents an ISO filesystem image.
Functions

validateDevice(device, unittest=False)
    Validates a configured device.
validateScsiId(scsiId)
    Validates a SCSI id string.
validateDriveSpeed(driveSpeed)
    Validates a drive speed value.
readMediaLabel(devicePath)
    Reads the media label (volume name) from the indicated device.
Variables
      logger = logging.getLogger("CedarBackup2.log.writers.util")
      MKISOFS_COMMAND = ['mkisofs']
      VOLNAME_COMMAND = ['volname']
      __package__ = 'CedarBackup2.writers'
Function Details

    validateDevice(device, unittest=False)


    Validates a configured device. The device must be an absolute path, must exist, and must be writable. The unittest flag turns off validation of the device on disk.

    Parameters:
    • device - Filesystem device path.
    • unittest - Indicates whether we're unit testing.
    Returns:
    Device as a string, for instance "/dev/cdrw"
    Raises:
    • ValueError - If the device value is invalid.
    • ValueError - If some path cannot be encoded properly.
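The documented rules (absolute path, exists, writable, with the unittest flag skipping the on-disk checks) can be sketched with a minimal reimplementation; this is illustrative only, not the library's actual code:

```python
import os

def validate_device(device, unittest=False):
    """Sketch of the documented validateDevice() rules; not the real implementation."""
    if device is None:
        raise ValueError("Device must be filled in.")
    if not os.path.isabs(device):
        raise ValueError("Device must be an absolute path.")
    if not unittest:
        # On-disk checks are skipped when unit testing
        if not os.path.exists(device):
            raise ValueError("Device must exist on disk.")
        if not os.access(device, os.W_OK):
            raise ValueError("Device must be writable.")
    return device
```

With unittest=True the function only checks the path shape, which is what makes the flag useful in test suites that have no real writer device.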

    validateScsiId(scsiId)


    Validates a SCSI id string. SCSI id must be a string in the form [<method>:]scsibus,target,lun. For Mac OS X (Darwin), we also accept the form IO.*Services[/N].

    Parameters:
    • scsiId - SCSI id for the device.
    Returns:
    SCSI id as a string, for instance "ATA:1,0,0"
    Raises:
    • ValueError - If the SCSI id string is invalid.

    Note: For consistency, if None is passed in, None will be returned.
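A sketch of the two documented forms as regular expressions follows; the patterns are approximations for illustration, and the library's actual expressions may differ in detail:

```python
import re

# Approximations of the documented forms, not the library's actual patterns.
_SCSI_FORM = re.compile(r"^\s*(\w+:)?\s*\d+,\d+,\d+\s*$")      # [<method>:]scsibus,target,lun
_DARWIN_FORM = re.compile(r"^\s*IO.*Services(/\d+)?\s*$")      # Mac OS X IO.*Services[/N]

def validate_scsi_id(scsi_id):
    """Sketch of the documented validateScsiId() behavior."""
    if scsi_id is None:
        return None  # documented: None passes through unchanged
    if _SCSI_FORM.match(scsi_id) or _DARWIN_FORM.match(scsi_id):
        return scsi_id
    raise ValueError("SCSI id is not in a valid form.")
```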

    validateDriveSpeed(driveSpeed)


    Validates a drive speed value. Drive speed must be an integer which is >= 1.

    Parameters:
    • driveSpeed - Speed at which the drive writes.
    Returns:
    Drive speed as an integer
    Raises:
    • ValueError - If the drive speed value is invalid.

    Note: For consistency, if None is passed in, None will be returned.
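The rule (None passes through; otherwise the value must convert to an integer >= 1) can be sketched as follows; this is an illustration of the documented contract, not the library's code:

```python
def validate_drive_speed(drive_speed):
    """Sketch of the documented validateDriveSpeed() rules."""
    if drive_speed is None:
        return None  # documented: None passes through unchanged
    try:
        speed = int(drive_speed)
    except (TypeError, ValueError):
        raise ValueError("Drive speed must be an integer >= 1.")
    if speed < 1:
        raise ValueError("Drive speed must be an integer >= 1.")
    return speed
```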

    readMediaLabel(devicePath)


    Reads the media label (volume name) from the indicated device. The volume name is read using the volname command.

    Parameters:
    • devicePath - Device path to read from
    Returns:
    Media label as a string, or None if there is no name or it could not be read.

CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.CollectConfig-class.html
CedarBackup2.config.CollectConfig

    Class CollectConfig


    object --+
             |
            CollectConfig
    

    Class representing a Cedar Backup collect configuration.

    The following restrictions exist on data in this class:

    • The target directory must be an absolute path.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The archive mode must be one of the values in VALID_ARCHIVE_MODES.
    • The ignore file must be a non-empty string.
• Each of the paths in absoluteExcludePaths must be an absolute path.
    • The collect file list must be a list of CollectFile objects.
    • The collect directory list must be a list of CollectDir objects.

For the absoluteExcludePaths list, validation is accomplished through the util.AbsolutePathList list implementation, which overrides the common list methods and transparently validates that each entry is an absolute path.

For the collectFiles and collectDirs lists, validation is accomplished through the util.ObjectTypeList list implementation, which overrides the common list methods and transparently ensures that each element has the appropriate type.


    Note: Lists within this class are "unordered" for equality comparisons.
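The "unordered" semantics mean two configurations compare equal even when their lists hold the same elements in different orders. Conceptually (a sketch of the idea, not the library's UnorderedList code):

```python
def unordered_equal(list1, list2):
    # Two lists are "equal" if they contain the same elements,
    # regardless of order, mirroring the documented semantics.
    return sorted(list1) == sorted(list2)
```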

Instance Methods

__init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None)
    Constructor for the CollectConfig class.
__repr__(self)
    Official string representation for class instance.
__str__(self)
    Informal string representation for class instance.
__cmp__(self, other)
    Definition of equals operator for this class.
_setTargetDir(self, value)
    Property target used to set the target directory.
_getTargetDir(self)
    Property target used to get the target directory.
_setCollectMode(self, value)
    Property target used to set the collect mode.
_getCollectMode(self)
    Property target used to get the collect mode.
_setArchiveMode(self, value)
    Property target used to set the archive mode.
_getArchiveMode(self)
    Property target used to get the archive mode.
_setIgnoreFile(self, value)
    Property target used to set the ignore file.
_getIgnoreFile(self)
    Property target used to get the ignore file.
_setAbsoluteExcludePaths(self, value)
    Property target used to set the absolute exclude paths list.
_getAbsoluteExcludePaths(self)
    Property target used to get the absolute exclude paths list.
_setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
_getExcludePatterns(self)
    Property target used to get the exclude patterns list.
_setCollectFiles(self, value)
    Property target used to set the collect files list.
_getCollectFiles(self)
    Property target used to get the collect files list.
_setCollectDirs(self, value)
    Property target used to set the collect dirs list.
_getCollectDirs(self)
    Property target used to get the collect dirs list.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      targetDir
    Directory to collect files into.
      collectMode
    Default collect mode.
      archiveMode
    Default archive mode for collect files.
      ignoreFile
    Default ignore file name.
      absoluteExcludePaths
    List of absolute paths to exclude.
      excludePatterns
List of regular expression patterns to exclude.
      collectFiles
    List of collect files.
      collectDirs
    List of collect directories.

    Inherited from object: __class__

Method Details

    __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None)
    (Constructor)


    Constructor for the CollectConfig class.

    Parameters:
    • targetDir - Directory to collect files into.
    • collectMode - Default collect mode.
    • archiveMode - Default archive mode for collect files.
    • ignoreFile - Default ignore file name.
    • absoluteExcludePaths - List of absolute paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude.
    • collectFiles - List of collect files.
    • collectDirs - List of collect directories.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
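The "unordered" list semantics described above can be sketched in plain Python. This hypothetical helper (not the actual implementation) compares two lists while ignoring element order, returning -1/0/1 in the style of __cmp__:

```python
def compare_unordered(left, right):
    """Compare two lists ignoring element order, returning -1/0/1 in the
    style of __cmp__.  None is treated like an empty list.  Hypothetical
    helper illustrating the documented 'unordered' comparison semantics."""
    left_sorted = sorted(left) if left is not None else []
    right_sorted = sorted(right) if right is not None else []
    if left_sorted == right_sorted:
        return 0
    return -1 if left_sorted < right_sorted else 1
```

With this approach, [1, 2, 3] and [3, 2, 1] compare as equal, which is the behavior the class documents for its list-valued properties.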

    _setTargetDir(self, value)

    source code 

    Property target used to set the target directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.
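The path validation described above can be sketched in plain Python. `set_target_dir` below is a hypothetical standalone function, not the actual property target (the real method also handles encoding issues):

```python
import os

def set_target_dir(value):
    """Validate a target directory as documented above: None is allowed,
    otherwise the value must be an absolute path.  The directory does not
    have to exist on disk at the time of assignment, so no existence
    check is performed here."""
    if value is not None and not os.path.isabs(value):
        raise ValueError("Target directory must be an absolute path.")
    return value
```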

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value cannot be encoded properly.

    _setAbsoluteExcludePaths(self, value)

    source code 

    Property target used to set the absolute exclude paths list. Either the value must be None or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.

    _setCollectFiles(self, value)

    source code 

    Property target used to set the collect files list. Either the value must be None or each element must be a CollectFile.

    Raises:
    • ValueError - If the value is not a CollectFile

    _setCollectDirs(self, value)

    source code 

    Property target used to set the collect dirs list. Either the value must be None or each element must be a CollectDir.

    Raises:
    • ValueError - If the value is not a CollectDir

    Property Details

    targetDir

    Directory to collect files into.

    Get Method:
    _getTargetDir(self) - Property target used to get the target directory.
    Set Method:
    _setTargetDir(self, value) - Property target used to set the target directory.

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Default archive mode for collect files.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

    ignoreFile

    Default ignore file name.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

    absoluteExcludePaths

    List of absolute paths to exclude.

    Get Method:
    _getAbsoluteExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setAbsoluteExcludePaths(self, value) - Property target used to set the absolute exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    collectFiles

    List of collect files.

    Get Method:
    _getCollectFiles(self) - Property target used to get the collect files list.
    Set Method:
    _setCollectFiles(self, value) - Property target used to set the collect files list.

    collectDirs

    List of collect directories.

    Get Method:
    _getCollectDirs(self) - Property target used to get the collect dirs list.
    Set Method:
    _setCollectDirs(self, value) - Property target used to set the collect dirs list.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.ByteQuantity-class.html
    Package CedarBackup2 :: Module config :: Class ByteQuantity

    Class ByteQuantity

    source code

    object --+
             |
            ByteQuantity
    

    Class representing a byte quantity.

    A byte quantity has both a quantity and a byte-related unit. Units are maintained using the constants from util.py.

    The quantity is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.)

    Even though the quantity is maintained as a string, the string must represent a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative quantity of bytes in this context.
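As a rough illustration of the string-based design described above, the minimal stand-in below mimics the documented behavior: the quantity is stored exactly as the string that was assigned, and conversion to a byte count only happens on demand. The UNIT_BYTES/UNIT_KBYTES stand-ins and their factors are assumptions for this sketch; the real constants live in util.py and this is not the actual ByteQuantity class:

```python
UNIT_BYTES, UNIT_KBYTES = 0, 1  # stand-ins for the constants in util.py

class SimpleByteQuantity(object):
    """Minimal stand-in mimicking the documented ByteQuantity behavior."""
    FACTORS = {UNIT_BYTES: 1.0, UNIT_KBYTES: 1024.0}  # assumed unit factors

    def __init__(self, quantity=None, units=None):
        if quantity is not None and float(quantity) < 0.0:
            raise ValueError("Quantity must not be negative.")
        self.quantity = quantity   # kept as a string, never converted in place
        self.units = units

    @property
    def bytes(self):
        if self.quantity is None:
            return 0.0             # documented default when no quantity is set
        return float(self.quantity) * self.FACTORS[self.units]
```

For example, SimpleByteQuantity("1.25", UNIT_KBYTES) keeps "1.25" verbatim for lossless XML round-trips, while its bytes property yields 1280.0 when needed.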

    Instance Methods
     
    __init__(self, quantity=None, units=None)
    Constructor for the ByteQuantity class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setQuantity(self, value)
    Property target used to set the quantity. The value must be a non-empty string if it is not None.
    source code
     
    _getQuantity(self)
    Property target used to get the quantity.
    source code
     
    _setUnits(self, value)
    Property target used to set the units value.
    source code
     
    _getUnits(self)
    Property target used to get the units value.
    source code
     
    _getBytes(self)
    Property target used to return the byte quantity as a floating point number.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      quantity
    Byte quantity, as a string
      units
    Units for byte quantity, for instance UNIT_BYTES
      bytes
    Byte quantity, as a floating point number.

    Inherited from object: __class__

    Method Details

    __init__(self, quantity=None, units=None)
    (Constructor)

    source code 

    Constructor for the ByteQuantity class.

    Parameters:
    • quantity - Quantity of bytes, as string ("1.25")
    • units - Unit of bytes, one of VALID_BYTE_UNITS
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setQuantity(self, value)

    source code 

    Property target used to set the quantity. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number
    • ValueError - If the value is less than zero
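The three validation rules listed above amount to checking a float-formatted string. The hypothetical standalone function below (not the actual property target) sketches them:

```python
def set_quantity(value):
    """Validate a byte quantity as documented above: None is allowed;
    otherwise the value must be a non-empty string that parses as a
    floating point number greater than or equal to zero."""
    if value is None:
        return None
    if value == "":
        raise ValueError("Quantity must be a non-empty string.")
    parsed = float(value)  # raises ValueError for a malformed float string
    if parsed < 0.0:
        raise ValueError("Quantity must not be less than zero.")
    return value           # the original string is kept, not the float
```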

    _setUnits(self, value)

    source code 

    Property target used to set the units value. If not None, the units value must be one of the values in VALID_BYTE_UNITS.

    Raises:
    • ValueError - If the value is not valid.

    _getBytes(self)

    source code 

    Property target used to return the byte quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned.


    Property Details

    quantity

    Byte quantity, as a string

    Get Method:
    _getQuantity(self) - Property target used to get the quantity.
    Set Method:
    _setQuantity(self, value) - Property target used to set the quantity. The value must be a non-empty string if it is not None.

    units

    Units for byte quantity, for instance UNIT_BYTES

    Get Method:
    _getUnits(self) - Property target used to get the units value.
    Set Method:
    _setUnits(self, value) - Property target used to set the units value.

    bytes

    Byte quantity, as a floating point number.

    Get Method:
    _getBytes(self) - Property target used to return the byte quantity as a floating point number.

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.util.Pipe-class.html
    Package CedarBackup2 :: Module util :: Class Pipe

    Class Pipe

    source code

          object --+    
                   |    
    subprocess.Popen --+
                       |
                      Pipe
    

    Specialized pipe class for use by executeCommand.

    The executeCommand function needs a specialized way of interacting with a pipe. First, executeCommand only reads from the pipe, and never writes to it. Second, executeCommand needs a way to discard all output written to stderr, as a means of simulating the shell 2>/dev/null construct.
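The stderr-discarding behavior described above can be sketched with the standard subprocess module. SimplePipe below is a simplified stand-in for the real Pipe class, not its actual implementation (it uses subprocess.DEVNULL, which exists in Python 3.3+, where the original predates that API):

```python
import subprocess

class SimplePipe(subprocess.Popen):
    """Simplified stand-in for the Pipe class described above: the caller
    only ever reads from the child's stdout, and stderr is either folded
    into stdout or discarded like the shell's 2>/dev/null construct."""
    def __init__(self, cmd, bufsize=-1, ignoreStderr=False):
        # Discard stderr entirely when asked; otherwise merge it into
        # stdout so the caller sees a single readable stream.
        stderr = subprocess.DEVNULL if ignoreStderr else subprocess.STDOUT
        subprocess.Popen.__init__(self, cmd, bufsize=bufsize,
                                  stdout=subprocess.PIPE, stderr=stderr)
```

A caller would then do something like SimplePipe(["mkisofs", "-version"], ignoreStderr=True) and read only the command's stdout, with any stderr chatter silently dropped.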

    Instance Methods
     
    __init__(self, cmd, bufsize=-1, ignoreStderr=False)
    Create new Popen instance.
    source code

    Inherited from subprocess.Popen: __del__, communicate, kill, pipe_cloexec, poll, send_signal, terminate, wait

    Inherited from subprocess.Popen (private): _close_fds, _communicate, _communicate_with_poll, _communicate_with_select, _execute_child, _find_w9xpopen, _get_handles, _handle_exitstatus, _internal_poll, _make_inheritable, _readerthread, _set_cloexec_flag, _translate_newlines

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, cmd, bufsize=-1, ignoreStderr=False)
    (Constructor)

    source code 

    Create new Popen instance.

    Overrides: object.__init__
    (inherited documentation)

    CedarBackup2-2.22.0/doc/interface/CedarBackup2.config.BlankBehavior-class.html
    Package CedarBackup2 :: Module config :: Class BlankBehavior

    Class BlankBehavior

    source code

    object --+
             |
            BlankBehavior
    

    Class representing optimized store-action media blanking behavior.

    The following restrictions exist on data in this class:

    • The blanking mode must be one of the values in VALID_BLANK_MODES
    • The blanking factor must be a positive floating point number
    Instance Methods
     
    __init__(self, blankMode=None, blankFactor=None)
    Constructor for the BlankBehavior class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setBlankMode(self, value)
    Property target used to set the blanking mode.
    source code
     
    _getBlankMode(self)
    Property target used to get the blanking mode.
    source code
     
    _setBlankFactor(self, value)
    Property target used to set the blanking factor.
    source code
     
    _getBlankFactor(self)
    Property target used to get the blanking factor.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      blankMode
    Blanking mode
      blankFactor
    Blanking factor

    Inherited from object: __class__

    Method Details

    __init__(self, blankMode=None, blankFactor=None)
    (Constructor)

    source code 

    Constructor for the BlankBehavior class.

    Parameters:
    • blankMode - Blanking mode
    • blankFactor - Blanking factor
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setBlankMode(self, value)

    source code 

    Property target used to set the blanking mode. The value must be one of VALID_BLANK_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setBlankFactor(self, value)

    source code 

    Property target used to set the blanking factor. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number
    • ValueError - If the value is less than zero

    Property Details

    blankMode

    Blanking mode

    Get Method:
    _getBlankMode(self) - Property target used to get the blanking mode.
    Set Method:
    _setBlankMode(self, value) - Property target used to set the blanking mode.

    blankFactor

    Blanking factor

    Get Method:
    _getBlankFactor(self) - Property target used to get the blanking factor.
    Set Method:
    _setBlankFactor(self, value) - Property target used to set the blanking factor.

    CedarBackup2-2.22.0/doc/release.txt

    I am pleased to announce the release of Cedar Backup v2.0. This release has been more than a year in the works. During this time, the main focus was to clean up the codebase and the documentation, making the whole project easier to read, maintain, debug and enhance. Another major priority was validation, and the new implementation relies heavily on automated regression testing. Existing enhancement requests took a back seat to this cleanup effort, but are planned for future releases. The old v1.0 code tree will still be maintained for security support and major bug fixes, but all new development will take place on the v2.0 code tree.

    The new Debian package is called cedar-backup2 rather than cedar-backup. The old and new packages cannot be installed at the same time, but you can fall back to your existing cedar-backup package if you have problems with the new cedar-backup2 package.

    This should be considered a high-quality beta release. It has been through testing on my personal systems (all running various Debian releases), but could still harbour unknown bugs. If you have time, please report back to the cedar-backup-users mailing list about your experience with this new version, good or bad.

    DOWNLOAD

    Information about how to download Cedar Backup can be found on the Cedar Solutions website:

    http://cedar-solutions.com/software/cedar-backup

    Cedar Solutions provides binary packages for Debian 'sarge' and 'woody', and source packages for other Linux platforms.
    DOCUMENTATION

    The newly-rewritten Cedar Backup Software Manual can be found on the Cedar Solutions website:

    Single-page HTML: http://cedar-solutions.com/cedar-backup/manual/manual.html
    Multiple-page HTML: http://cedar-solutions.com/cedar-backup/manual/index.html
    Portable Document Format (PDF): http://cedar-solutions.com/cedar-backup/manual/manual.pdf
    Plaintext: http://cedar-solutions.com/cedar-backup/manual/manual.txt

    Most users will want to look at the multiple-page HTML version. Users who wish to print the software manual should use the PDF version.

    MAJOR IMPROVEMENTS IN THIS RELEASE

    The v2.0 release represents a ground-up rewrite of the Cedar Backup codebase using Python 2.3. The following is a partial list of major changes, enhancements and improvements:

    - Code is better structured, with a sensible mix of classes and functions.
    - Documentation has been completely rewritten from scratch in DocBook Lite.
    - Unicode filenames are now natively supported without Python site changes.
    - The runtime 'validate' action now checks for many more config problems.
    - There are no longer any restrictions related to backups spanning midnight.
    - Most lower-level code is intended to be general-purpose "library" code.
    - Configuration is standardized in a common class, so 3rd parties can use it.
    - Collect and stage configuration now support various additional options.
    - Package now supports 3rd-party backup actions via an extension mechanism.
    - Most library code is thoroughly tested via pyunit (1700+ individual tests).
    - Code structure allows for easy addition of other backup types (e.g. DVD).
    - Code now uses Python's integrated logging module, resulting in realtime logs.
    - Collect action uses Python's tar module rather than shelling out to GNU tar.
    - Internal use of pipes should now be more robust and less prone to problems.

    USER-VISIBLE CHANGES IN THIS RELEASE

    Cedar Backup v2.0 requires Python 2.3 or better. Cedar Backup v1.0 only required Python 2.2.
    Cedar Backup configuration files that were valid for the v1.0 release should still be valid for the v2.0 release, with one exception: the tarz (.tar.Z) backup format is no longer supported. This is because the Python tar module does not support this format. If there is sufficient interest, this backup format could be added again by shelling out to an external compress program.

    The Cedar Backup command-line interface has changed slightly, but the changes should not present a problem for most users. In Cedar Backup v1.0, backup actions (collect, stage, store, purge) were specified on the command line with switches, i.e. --collect. This is not considered a good practice, so v2.0 instead accepts actions as plain arguments specified after all switches. For instance, the v1.0 command "cback --full --collect" is converted to "cback --full collect" in v2.0.

    WHAT IS CEDAR BACKUP?

    Cedar Backup is a Python package that supports secure backups of files on local and remote hosts to CD-R or CD-RW media. The package is focused around weekly backups to a single disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, the script can write multisession discs, allowing you to add to a disc in a daily fashion. Directories are backed up using tar and may be compressed using gzip or bzip2.

    CedarBackup2-2.22.0/doc/manual/ch05s04.html

    Setting up a Client Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Note

    See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure the master in your backup pool.

    You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

    To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

    user@machine> cat ~/.ssh/id_rsa.pub
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
    uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
    HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine
             

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night).

    You should create a collect directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a client, you must configure all action-specific sections for the collect and purge actions.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [26]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.

    CedarBackup2-2.22.0/doc/manual/ch05s02.html

    Configuration File Format

    Cedar Backup is configured through an XML [22] configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

    All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. [23] The extensions section is always optional and can be omitted unless extensions are in use.

    Note

    Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset.

    Sample Configuration File

    Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes a stripped config file in /etc/cback.conf and a larger sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample.

    This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections.

    <?xml version="1.0"?>
    <cb_config>
       <reference>
          <author>Kenneth J. Pronovici</author>
          <revision>1.3</revision>
          <description>Sample</description>
       </reference>
       <options>
          <starting_day>tuesday</starting_day>
          <working_dir>/opt/backup/tmp</working_dir>
          <backup_user>backup</backup_user>
          <backup_group>group</backup_group>
          <rcp_command>/usr/bin/scp -B</rcp_command>
       </options>
       <peers>
          <peer>
             <name>debian</name>
             <type>local</type>
             <collect_dir>/opt/backup/collect</collect_dir>
          </peer>
       </peers>
       <collect>
          <collect_dir>/opt/backup/collect</collect_dir>
          <collect_mode>daily</collect_mode>
          <archive_mode>targz</archive_mode>
          <ignore_file>.cbignore</ignore_file>
          <dir>
             <abs_path>/etc</abs_path>
             <collect_mode>incr</collect_mode>
          </dir>
          <file>
             <abs_path>/home/root/.profile</abs_path>
             <collect_mode>weekly</collect_mode>
          </file>
       </collect>
       <stage>
          <staging_dir>/opt/backup/staging</staging_dir>
       </stage>
       <store>
          <source_dir>/opt/backup/staging</source_dir>
          <media_type>cdrw-74</media_type>
          <device_type>cdwriter</device_type>
          <target_device>/dev/cdrw</target_device>
          <target_scsi_id>0,0,0</target_scsi_id>
          <drive_speed>4</drive_speed>
          <check_data>Y</check_data>
          <check_media>Y</check_media>
          <warn_midnite>Y</warn_midnite>
       </store>
       <purge>
          <dir>
             <abs_path>/opt/backup/stage</abs_path>
             <retain_days>7</retain_days>
          </dir>
          <dir>
             <abs_path>/opt/backup/collect</abs_path>
             <retain_days>0</retain_days>
          </dir>
       </purge>
    </cb_config>
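Because the configuration file is plain XML, it is easy to inspect programmatically. This is a minimal sketch (not part of Cedar Backup itself) using Python's standard library:

```python
# Illustrative sketch: read values out of a cback.conf-style document
# using only the standard library.
import xml.etree.ElementTree as ET

sample = """<?xml version="1.0"?>
<cb_config>
   <options>
      <starting_day>tuesday</starting_day>
      <backup_user>backup</backup_user>
   </options>
</cb_config>"""

root = ET.fromstring(sample)
print(root.findtext("options/starting_day"))  # tuesday
print(root.findtext("options/backup_user"))   # backup
```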
             

    Reference Configuration

    The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

    This is an example reference configuration section:

    <reference>
       <author>Kenneth J. Pronovici</author>
       <revision>Revision 1.3</revision>
       <description>Sample</description>
       <generator>Yet to be Written Config Tool (tm)</generator>
    </reference>
             

    The following elements are part of the reference configuration section:

    author

    Author of the configuration file.

    Restrictions: None

    revision

    Revision of the configuration file.

    Restrictions: None

    description

    Description of the configuration file.

    Restrictions: None

    generator

    Tool that generated the configuration file, if any.

    Restrictions: None

    Options Configuration

    The options configuration section contains configuration options that are not specific to any one action.

    This is an example options configuration section:

    <options>
       <starting_day>tuesday</starting_day>
       <working_dir>/opt/backup/tmp</working_dir>
       <backup_user>backup</backup_user>
       <backup_group>backup</backup_group>
       <rcp_command>/usr/bin/scp -B</rcp_command>
       <rsh_command>/usr/bin/ssh</rsh_command>
       <cback_command>/usr/bin/cback</cback_command>
       <managed_actions>collect, purge</managed_actions>
       <override>
          <command>cdrecord</command>
          <abs_path>/opt/local/bin/cdrecord</abs_path>
       </override>
       <override>
          <command>mkisofs</command>
          <abs_path>/opt/local/bin/mkisofs</abs_path>
       </override>
       <pre_action_hook>
          <action>collect</action>
          <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
       </pre_action_hook>
       <post_action_hook>
          <action>collect</action>
          <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
       </post_action_hook>
    </options>
             

    The following elements are part of the options configuration section:

    starting_day

    Day that starts the week.

    Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.

    Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.
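A minimal sketch of this restriction, assuming a simple membership check:

```python
# Illustrative sketch of the documented rule: starting_day must be a
# lower-case English day name, and validation is case-sensitive.
VALID_DAYS = {"monday", "tuesday", "wednesday", "thursday",
              "friday", "saturday", "sunday"}

def is_valid_starting_day(value):
    return value in VALID_DAYS

print(is_valid_starting_day("tuesday"))  # True
print(is_valid_starting_day("Tuesday"))  # False (case-sensitive)
```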

    working_dir

    Working (temporary) directory to use for backups.

    This directory is used for writing temporary files, such as tar file or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups.

    The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).

    Restrictions: Must be an absolute path

    backup_user

    Effective user that backups should run as.

    This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced).

    This value is also used as the default remote backup user for remote peers.

    Restrictions: Must be non-empty

    backup_group

    Effective group that backups should run as.

    This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced).

    Restrictions: Must be non-empty

    rcp_command

    Default rcp-compatible copy command for staging.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.

    Restrictions: Must be non-empty
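To see why the value must be the exact command including options, here is a hedged sketch of how a caller might turn the configured string into a command line. The peer, user, and file names are hypothetical:

```python
# Illustrative sketch: split the configured rcp_command into arguments
# and append a source and target, as a staging step might.
import shlex

rcp_command = "/usr/bin/scp -B"
source = "backup@machine2:/opt/backup/collect/etc.tar.gz"   # hypothetical
target = "/opt/backup/staging/etc.tar.gz"                   # hypothetical

args = shlex.split(rcp_command) + [source, target]
# subprocess.call(args) would then run:
#   /usr/bin/scp -B backup@machine2:... /opt/backup/staging/...
print(args[:2])  # ['/usr/bin/scp', '-B']
```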

    rsh_command

    Default rsh-compatible command to use for remote shells.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty

    cback_command

    Default cback-compatible command to use on managed remote clients.

    The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Default set of actions that are managed on remote clients.

    This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty.

    override

    Command to override with a customized path.

    This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    command

    Name of the command to be overridden, i.e. cdrecord.

    Restrictions: Must be a non-empty string.

    abs_path

    The absolute path where the overridden command can be found.

    Restrictions: Must be an absolute path.

    pre_action_hook

    Hook configuring a command to be executed before an action.

    This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.
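For instance, rather than embedding a complicated command, configuration can reference a script. The script path below is hypothetical:

```
<pre_action_hook>
   <action>collect</action>
   <command>/usr/local/bin/pre-collect.sh</command>
</pre_action_hook>
```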

    post_action_hook

    Hook configuring a command to be executed after an action.

    This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.

    Peers Configuration

    The peers configuration section contains a list of the peers managed by a master. This section is only required on a master.

    This is an example peers configuration section:

    <peers>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <ignore_failures>all</ignore_failures>
       </peer>
       <peer>
          <name>machine3</name>
          <type>remote</type>
          <managed>Y</managed>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <rcp_command>/usr/bin/scp</rcp_command>
          <rsh_command>/usr/bin/ssh</rsh_command>
          <cback_command>/usr/bin/cback</cback_command>
          <managed_actions>collect, purge</managed_actions>
       </peer>
    </peers>
             

    The following elements are part of the peers configuration section:

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer managed by a master.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".
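The four modes can be summarized as a small decision table. This is an illustrative sketch, not Cedar Backup's actual implementation; full_backup here stands for a full or start-of-week backup:

```python
# Illustrative sketch of the ignore_failures decision table described above.
def report_failure(mode, full_backup):
    if mode == "none":
        return True             # report all errors (default)
    if mode == "all":
        return False            # ignore all failures
    if mode == "weekly":
        return not full_backup  # ignore failures on full/start-of-week backups
    if mode == "daily":
        return full_backup      # ignore failures on non-full backups
    raise ValueError("mode must be none, all, daily, or weekly")

print(report_failure("weekly", full_backup=True))  # False
print(report_failure("daily", full_backup=True))   # True
```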

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    managed

    Indicates whether this peer is managed.

    A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    rsh_command

    The rsh-compatible command for this peer.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section.

    Restrictions: Must be non-empty

    cback_command

    The cback-compatible command for this peer.

    The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default cback command from the options section.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Set of actions that are managed for this peer.

    This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section.

    Restrictions: Must be non-empty.

    Collect Configuration

    The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

    In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.

    This is an example collect configuration section:

    <collect>
       <collect_dir>/opt/backup/collect</collect_dir>
       <collect_mode>daily</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <exclude>
          <abs_path>/etc</abs_path>
          <pattern>.*\.conf</pattern>
       </exclude>
       <file>
          <abs_path>/home/root/.profile</abs_path>
       </file>
       <dir>
          <abs_path>/etc</abs_path>
       </dir>
       <dir>
          <abs_path>/var/log</abs_path>
          <collect_mode>incr</collect_mode>
       </dir>
       <dir>
          <abs_path>/opt</abs_path>
          <collect_mode>weekly</collect_mode>
          <exclude>
             <abs_path>/opt/large</abs_path>
             <rel_path>backup</rel_path>
             <pattern>.*tmp</pattern>
          </exclude>
       </dir>
    </collect>
             

    The following elements are part of the collect configuration section:

    collect_dir

    Directory to collect files into.

    On a client, this is the directory which tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory.

    This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form.

    Restrictions: Must be an absolute path

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Default archive mode for collect files.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of tar, targz or tarbz2.
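Since Cedar Backup is written in Python, these three values map naturally onto the modes accepted by the standard tarfile module. The snippet below is an illustrative sketch of that mapping, not the program's own code:

```python
# Illustrative sketch: map each archive_mode value onto a tarfile open
# mode and the corresponding file extension.
import io
import tarfile

MODES = {"tar":    ("w",     ".tar"),
         "targz":  ("w:gz",  ".tar.gz"),
         "tarbz2": ("w:bz2", ".tar.bz2")}

def open_archive(buffer, archive_mode):
    mode, _ = MODES[archive_mode]
    return tarfile.open(fileobj=buffer, mode=mode)

buf = io.BytesIO()
with open_archive(buf, "targz") as tar:
    pass  # files would be added here with tar.add()
print(buf.getvalue()[:2] == b"\x1f\x8b")  # True: gzip magic bytes
```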

    ignore_file

    Default ignore file name.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be non-empty
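The pruning behavior can be sketched with os.walk; this is an illustration of the documented semantics, not Cedar Backup's own implementation:

```python
# Illustrative sketch: any directory containing the indicator file is
# skipped, and the walk does not descend into it.
import os

def collectable_dirs(top, ignore_file=".cbignore"):
    for dirpath, dirnames, filenames in os.walk(top):
        if ignore_file in filenames:
            dirnames[:] = []   # do not descend further
            continue           # skip this directory entirely
        yield dirpath
```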

    recursion_level

    Recursion level to use when collecting directories.

    This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory.

    Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory.

    The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead, you want one archive file per home directory you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc.

    Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. A negative level recurses the entire tree, just as a recursion level deeper than the tree would.

    This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.

    Restrictions: Must be an integer.
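The effect of the recursion level can be sketched as follows. This is an illustrative simplification (it ignores some corner cases, such as empty directories under an infinite recursion level), not Cedar Backup's own code:

```python
# Illustrative sketch: enumerate the roots that would each become their
# own archive for a given recursion level. Level 0 archives the collect
# directory itself; level 1 archives each immediate subdirectory.
import os

def archive_roots(collect_dir, recursion_level=0):
    if recursion_level == 0:
        return [collect_dir]
    roots = []
    for entry in sorted(os.listdir(collect_dir)):
        path = os.path.join(collect_dir, entry)
        if os.path.isdir(path):
            roots.extend(archive_roots(path, recursion_level - 1))
        else:
            roots.append(path)  # loose files still need to be archived
    return roots
```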

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however.

    This section is optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    pattern

    A pattern to be recursively excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty
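The anchoring behavior matters: a pattern must match the whole path, not just a substring. A quick sketch using Python's re module:

```python
# Illustrative sketch of the documented semantics: the pattern is treated
# as if it begins with ^ and ends with $.
import re

def matches_exclusion(pattern, path):
    return re.match(pattern + r"$", path) is not None  # re.match anchors the front

print(matches_exclusion(r".*\.conf", "/etc/apache2/apache2.conf"))  # True
print(matches_exclusion(r"\.conf", "/etc/apache2/apache2.conf"))    # False: must match the whole path
```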

    file

    A file to be collected.

    This is a subsection which contains information about a specific file to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect file subsection contains the following fields:

    abs_path

    Absolute path of the file to collect.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this file

    The collect mode describes how frequently a file is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this file.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    dir

    A directory to be collected.

    This is a subsection which contains information about a specific directory to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to collect.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level.

    The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc.

    Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this directory

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this directory.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Ignore file name for this directory.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This field is optional. If it doesn't exist, the backup will use the default ignore file name.

    Restrictions: Must be non-empty

    link_depth

    Link depth value to use for this directory.

    The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.

    This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.

    Restrictions: If set, must be an integer ≥ 0.

    dereference

    Whether to dereference soft links.

    If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well.

    This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory.

    This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.

    Restrictions: Must be a boolean (Y or N).
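
    Taken together, the optional fields above might be combined within a single collect directory definition like this (a hypothetical sketch; the path and values are illustrative only):

    ```xml
    <dir>
       <abs_path>/home/user</abs_path>
       <collect_mode>incr</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <link_depth>1</link_depth>
       <dereference>Y</dereference>
    </dir>
    ```

    With this configuration, /home/user would be backed up incrementally as a gzipped tarfile, soft links one level down would be followed and dereferenced, and any subdirectory containing a .cbignore file would be skipped.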

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    rel_path

    A relative path to be recursively excluded from the backup.

    The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
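
    As a sketch, an exclude subsection might combine all three field types within one collect directory (the paths and pattern here are purely illustrative):

    ```xml
    <exclude>
       <abs_path>/opt/web/logs</abs_path>
       <rel_path>something/else</rel_path>
       <pattern>.*\.tmp</pattern>
    </exclude>
    ```

    Remember that the pattern is implicitly anchored at both ends, so .*\.tmp matches any path ending in .tmp, while a bare \.tmp would match nothing useful.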

    Stage Configuration

    The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers is staged to.

    This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

    This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
    </stage>
             

    This is an example stage configuration section that overrides the default list of peers:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
    </stage>
             

    The following elements are part of the stage configuration section:

    staging_dir

    Directory to stage files into.

    This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself.

    This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.

    Restrictions: Must be an absolute path.

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.
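
    For instance, a remote peer that overrides both the backup user and the rcp command might be configured like this (the hostname and paths are illustrative):

    ```xml
    <peer>
       <name>machine3</name>
       <type>remote</type>
       <backup_user>cback</backup_user>
       <collect_dir>/opt/backup/collect</collect_dir>
       <rcp_command>/usr/bin/scp -B</rcp_command>
    </peer>
    ```

    Peers that omit these fields simply fall back to the defaults from the options section.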

    Store Configuration

    The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

    This is an example store configuration section:

    <store>
       <source_dir>/opt/backup/stage</source_dir>
       <media_type>cdrw-74</media_type>
       <device_type>cdwriter</device_type>
       <target_device>/dev/cdrw</target_device>
       <target_scsi_id>0,0,0</target_scsi_id>
       <drive_speed>4</drive_speed>
       <check_data>Y</check_data>
       <check_media>Y</check_media>
       <warn_midnite>Y</warn_midnite>
       <no_eject>N</no_eject>
       <refresh_media_delay>15</refresh_media_delay>
       <eject_delay>2</eject_delay>
       <blank_behavior>
          <mode>weekly</mode>
          <factor>1.3</factor>
       </blank_behavior>
    </store>
             

    The following elements are part of the store configuration section:

    source_dir

    Directory whose contents should be written to media.

    This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

    Restrictions: Must be an absolute path.

    device_type

    Type of the device used to write the media.

    This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter).

    This field is optional. If it doesn't exist, the cdwriter device type is assumed.

    Restrictions: If set, must be either cdwriter or dvdwriter.

    media_type

    Type of the media in the device.

    Unless you want to throw away a backup disc every week, you are probably best off using rewritable media.

    You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called “Media and Device Types” (in Chapter 2, Basic Concepts).

    Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

    target_device

    Filesystem device name for writer device.

    This value is required for both CD writers and DVD writers.

    This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.

    In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified.

    Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

    Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

    Restrictions: Must be an absolute path.

    target_scsi_id

    SCSI id for the writer device.

    This value is optional for CD writers and is ignored for DVD writers.

    If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord.

    Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

    For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun.

    An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord).

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Restrictions: If set, must be a valid SCSI identifier.
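
    As a sketch, a CD writer addressed via the ATA specialized method might be configured with both fields together (the device path and SCSI id here are illustrative):

    ```xml
    <target_device>/dev/cdrw</target_device>
    <target_scsi_id>ATA:1,0,0</target_scsi_id>
    ```

    The target device is still required even when a SCSI id is set, since it is used for the pre-write checks and the optional consistency check.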

    drive_speed

    Speed of the drive, i.e. 2 for a 2x device.

    This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.

    For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

    Restrictions: If set, must be an integer ≥ 1.

    check_data

    Whether the media should be validated.

    This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.

    Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    check_media

    Whether the media should be checked before writing to it.

    By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.)

    If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day.

    Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    no_eject

    Indicates that the writer device should not be ejected.

    Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session).

    For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will not ever issue an eject command to your writer.

    Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    refresh_media_delay

    Number of seconds to delay after refreshing media.

    This field is optional. If it doesn't exist, no delay will occur.

    Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

    Restrictions: If set, must be an integer ≥ 1.

    eject_delay

    Number of seconds to delay after ejecting the tray.

    This field is optional. If it doesn't exist, no delay will occur.

    If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly — either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

    Restrictions: If set, must be an integer ≥ 1.

    blank_behavior

    Optimized blanking strategy.

    For more information about Cedar Backup's optimized blanking strategy, see the section called “Optimized Blanking Strategy”.

    This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

    blank_mode

    Blanking mode.

    Restrictions: Must be one of daily or weekly.

    blank_factor

    Blanking factor.

    Restrictions: Must be a floating point number ≥ 0.

    Purge Configuration

    The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

    Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0).

    If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

    You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.

    This is an example purge configuration section:

    <purge>
       <dir>
          <abs_path>/opt/backup/stage</abs_path>
          <retain_days>7</retain_days>
       </dir>
       <dir>
          <abs_path>/opt/backup/collect</abs_path>
          <retain_days>0</retain_days>
       </dir>
    </purge>
             

    The following elements are part of the purge configuration section:

    dir

    A directory to purge within.

    This is a subsection which contains information about a specific directory to purge within.

    This section can be repeated as many times as is necessary. At least one purge directory must be configured.

    The purge directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to purge within.

    The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than retain_days days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

    Restrictions: Must be an absolute path.

    retain_days

    Number of days to retain old files.

    Once it has been more than this many days since a file was last modified, it is a candidate for removal.

    Restrictions: Must be an integer ≥ 0.

    Extensions Configuration

    The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

    Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

    Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

    Warning

    Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.

    If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed — and you would get no warning about this in your email!

    So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action.

    To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99.

    This is how the hypothetical action would be configured:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>99</index>
       </action>
    </extensions>
             

    The following elements are part of the extensions configuration section:

    action

    This is a subsection that contains configuration related to a single extended action.

    This section can be repeated as many times as is necessary.

    The action subsection contains the following fields:

    name

    Name of the extended action.

    Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

    module

    Name of the Python module associated with the extension function.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    function

    Name of the Python extension function within the module.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    index

    Index of action, for execution ordering.

    Restrictions: Must be an integer ≥ 0.


    The Backup Process

    The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control.

    This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called “Coordination between Master and Clients” (later in this chapter) for more information on this subject.

    A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge.

    In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order.

    The cback command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below.

    See Chapter 5, Configuration for more information on how a backup run is configured.

    The Collect Action

    The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2).

    There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.

    Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file [10] or specify absolute paths or filename patterns [11] to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration.

    This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

    The Stage Action

    The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name.

    For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

    Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh.

    If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running.

    Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

    Note

    Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

    The Store Action

    The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful.

    If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs.

    This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine.

    Warning

    The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    The Purge Action

    The purge action is the fourth and final action in a standard backup run. It executes both on the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged.

    Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.

    The All Action

    The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line.

    Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. [12]

    The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

    The Validate Action

    The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line.

    The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

    The Initialize Action

    The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device.

    However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized.

    Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP).

    Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).
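The media check described above can be sketched as a small predicate. This is a simplified illustration, not Cedar Backup's actual implementation — the real check inspects the media through the writer device; the only detail taken from the text is that initialized media carries a label beginning with CEDAR BACKUP.

```python
def media_check_passes(media_label, rewritable):
    """Sketch of the 'check media' logic described above.

    Assumes media_label is None when the media carries no label at all.
    """
    if media_label is not None and media_label.startswith("CEDAR BACKUP"):
        return True  # properly initialized media always passes
    # non-rewritable media also passes if it is apparently unused
    return not rewritable and media_label is None
```

So an unlabeled CD-R passes the check, but an unlabeled CD-RW does not — rewritable media must be initialized first.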

    The Rebuild Action

    The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line.

    The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, to replace lost or damaged media, or to switch to new media mid-week for some other reason.

    To decide what data to write to disc again, the rebuild action looks back to find the first day of the current week. Then, it finds any remaining staging directories dated between that day and the current date. If any staging directories are found, they are all written to disc in one big ISO session.
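The date arithmetic behind this can be sketched as follows. This is illustrative only: the start_of_week parameter is an assumption (Cedar Backup derives the starting day of the week from configuration), and the real code maps dates to staging directory names.

```python
import datetime

def rebuild_dates(today, start_of_week=6):
    """Return the dates whose staging directories would be rewritten.

    start_of_week uses Python's weekday() convention (Monday=0), so the
    default of 6 means the week starts on Sunday.
    """
    days_back = (today.weekday() - start_of_week) % 7
    first_day = today - datetime.timedelta(days=days_back)
    return [first_day + datetime.timedelta(days=n) for n in range(days_back + 1)]
```

For a rebuild run on a Tuesday with a Sunday-based week, this yields Sunday, Monday, and Tuesday; any remaining staging directory dated on one of those days would be included in the rebuilt image.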

    The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.



    [10] Analogous to .cvsignore in CVS

    [11] In terms of Python regular expressions

    [12] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.


    Media and Device Types

    Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. [13]

    When using a new enough backup device, a new multisession ISO image [14] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data).

    Cedar Backup currently supports four different kinds of CD media:

    cdr-74

    74-minute non-rewritable CD media

    cdrw-74

    74-minute rewritable CD media

    cdr-80

    80-minute non-rewritable CD media

    cdrw-80

    80-minute rewritable CD media

    I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.

    Cedar Backup also supports two kinds of DVD media:

    dvd+r

    Single-layer non-rewritable DVD+R media

    dvd+rw

    Single-layer rewritable DVD+RW media

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.



    [13] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.

    [14] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a filesystem-within-a-file and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.


    Installing on a Debian System

    The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude.

    If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian etch release is the first release to contain Cedar Backup.) Otherwise, you need to install from the Cedar Solutions APT data source. To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file. [17]

    After you have configured the proper APT data source, install Cedar Backup using this set of commands:

    $ apt-get update
    $ apt-get install cedar-backup2 cedar-backup2-doc
          

    Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.

    If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. [18]

    In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

    Note

    The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.


    Conventions Used in This Book

    This section covers the various conventions used in this manual.

    Typographic Conventions

    Term

    Used for first use of important terms.

    Command

    Used for commands, command output, and switches

    Replaceable

    Used for replaceable items in code and text

    Filenames

    Used for file and directory names

    Icons

    Note

    This icon designates a note relating to the surrounding text.

    Tip

    This icon designates a helpful tip relating to the surrounding text.

    Warning

    This icon designates a warning relating to the surrounding text.


    PostgreSQL Extension

    The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL [31] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.
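Conceptually, the commands the extension runs look like the following. This helper is a sketch, not the extension's actual code — the real implementation builds similar argument lists internally, runs them through Cedar Backup's executeCommand wrapper, and pipes the output through the configured compressor.

```python
def dump_command(user=None, database=None):
    """Build a pg_dump/pg_dumpall argument list (illustrative helper)."""
    if database is None:
        args = ["pg_dumpall"]         # back up all databases in one dump
    else:
        args = ["pg_dump", database]  # back up one named database
    if user is not None:
        args[1:1] = ["-U", user]      # connect as the configured user
    return args
```

With all databases configured, a single pg_dumpall run produces one dump file; with individual databases configured, one pg_dump run (and one dump file) is produced per database.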

    The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file.

    This extension always produces a full backup. There is currently no facility for making incremental backups.

    Warning

    Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>postgresql</name>
          <module>CedarBackup2.extend.postgresql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>Y</all>
    </postgresql>
          

    If you decide to back up specific databases, then you would list them individually, like this:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>N</all>
       <database>db1</database>
       <database>db2</database>
    </postgresql>
          

    The following elements are part of the PostgreSQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user.

    This value is optional.

    Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.


    The cback command

    Introduction

    Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process.

    Syntax

    The cback command has the following syntax:

     Usage: cback [switches] action(s)
    
     The following switches are accepted:
    
       -h, --help         Display this usage/help listing
       -V, --version      Display version information
       -b, --verbose      Print verbose output as well as logging to disk
       -q, --quiet        Run quietly (display no output to the screen)
       -c, --config       Path to config file (default: /etc/cback.conf)
       -f, --full         Perform a full backup, regardless of configuration
       -M, --managed      Include managed clients when executing actions
       -N, --managed-only Include ONLY managed clients when executing actions
       -l, --logfile      Path to logfile (default: /var/log/cback.log)
       -o, --owner        Logfile ownership, user:group (default: root:adm)
       -m, --mode         Octal logfile permissions mode (default: 640)
       -O, --output       Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug        Write debugging information to the log (implies --output)
       -s, --stack        Dump a Python stack trace instead of swallowing exceptions
       -D, --diagnostics  Print runtime diagnostics to the screen and exit
    
     The following actions may be specified:
    
       all                Take all normal actions (collect, stage, store, purge)
       collect            Take the collect action
       stage              Take the stage action
       store              Take the store action
       purge              Take the purge action
       rebuild            Rebuild "this week's" disc if possible
       validate           Validate configuration only
       initialize         Initialize media for use with Cedar Backup
    
     You may also specify extended actions that have been defined in
     configuration.
    
     You must specify at least one action to take.  More than one of
     the "collect", "stage", "store" or "purge" actions and/or
     extended actions may be specified in any arbitrary order; they
     will be executed in a sensible order.  The "all", "rebuild",
     "validate", and "initialize" actions may not be combined with
     other actions.
             

    Note that the all action only executes the standard four actions. It never executes any of the configured extensions. [21]

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -f, --full

    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

    -M, --managed

    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

    -N, --managed-only

    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    Actions

    You can find more information about the various actions in the section called “The Backup Process” (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

    If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.



    [21] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing.


    Cedar Backup Pools

    There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines.

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way.


    Optimized Blanking Strategy

    When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period.

    Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often.

    If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.

    This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

    There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration, otherwise you will risk losing data.

    If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup.

    If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

    bytes available / (1 + bytes required) ≤ blanking factor
          
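Expressed as code, the rule is a direct transcription of the relationship above:

```python
def should_blank(bytes_available, bytes_required, blanking_factor):
    """Return True if the media should be blanked before writing."""
    return bytes_available / (1.0 + bytes_required) <= blanking_factor
```

With a blanking factor of 1.0, the media is blanked whenever the space remaining is no larger than the space required — which is why 1.0 is the sensible value for the daily blanking mode.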

    Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

    Total size of weekly backup / Full backup size at the start of the week
          

    This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

    /opt/backup/staging# du -s 2007/03/*
    3040    2007/03/01
    3044    2007/03/02
    6812    2007/03/03
    3044    2007/03/04
    3152    2007/03/05
    3056    2007/03/06
    3060    2007/03/07
    3056    2007/03/08
    4776    2007/03/09
    6812    2007/03/10
    11824   2007/03/11
          

    In this case, the ratio is approximately 4:

    (6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571
          
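The same arithmetic in code, using the sizes (in KB) from the du listing above — 03/03 is the full backup that started the previous week, and 03/04 through 03/09 are its incrementals:

```python
full = 6812                                          # 2007/03/03
incrementals = [3044, 3152, 3056, 3060, 3056, 4776]  # 03/04 .. 03/09
ratio = (full + sum(incrementals)) / full
print(round(ratio, 4))   # 3.9571
```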

    To be safe, you might choose to configure a factor of 5.0.

    Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary.

    If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.


    Appendix A. Extension Architecture Interface

    The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension.

    You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.

    There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>101</index>
       </action> 
    </extensions>
          

    In this case, the action database has been mapped to the extension function foo.bar().

    Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

    1. Extensions may not write to stdout or stderr using functions such as print or sys.stdout.write.

    2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

    3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

    4. Extensions may not return any value.

    5. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

    6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

    7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

    8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

    Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration.

    def function(configPath, options, config):
       """Sample extension function."""
       pass
          

    This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed.
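A slightly fuller skeleton that follows the rules above might look like this. Everything here is illustrative — the action name, the logger sub-topic, and the configuration check are assumptions, not part of the published interface:

```python
import logging

# Rule 2: flow-of-control logging happens on the CedarBackup2.log topic
logger = logging.getLogger("CedarBackup2.log.extend.database")

def executeAction(configPath, options, config):
    """Hypothetical entry point for a 'database' extended action."""
    logger.debug("Executing the database extended action.")
    if config.options is None:
        # Rule 5: failures are reported by raising a descriptive exception
        raise ValueError("Cedar Backup configuration is not complete.")
    logger.info("Database extended action completed.")
    # Rule 4: no return value
```

Note that the skeleton never prints to the terminal (rule 1) and returns nothing (rule 4); a real extension would also route any command-line invocations through CedarBackup2.util.executeCommand (rule 3).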

    The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3).

    If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions.

    For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

    <database>
       <repository>/path/to/repo1</repository>
       <repository>/path/to/repo2</repository>
    </database>
          

    In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.


    Encrypt Extension

    The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc.

    There are several different ways encryption could have been built into or layered onto Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced.

    Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

    Warning

    If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless.

    I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

    Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (i.e. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)
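The encryption step is conceptually equivalent to running gpg over each staged file with the configured recipient. The helper below is a sketch (the function name is hypothetical); the real extension invokes gpg through Cedar Backup's executeCommand wrapper:

```python
def gpg_encrypt_command(recipient, path):
    """Build the gpg command used to encrypt one staged file (sketch)."""
    # --batch and --yes keep gpg from prompting during an unattended backup
    return ["gpg", "--batch", "--yes", "-e", "-r", recipient, path]
```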

    An encrypted backup has the same file structure as a normal backup, so all of the instructions in Appendix C, Data Recovery apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

    Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>encrypt</name>
          <module>CedarBackup2.extend.encrypt</module>
          <function>executeAction</function>
          <index>301</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

    <encrypt>
       <encrypt_mode>gpg</encrypt_mode>
       <encrypt_target>Backup User</encrypt_target>
    </encrypt>
          

    The following elements are part of the Encrypt configuration section:

    encrypt_mode

    Encryption mode.

    This value specifies which encryption mechanism will be used by the extension.

    Currently, only the GPG public-key encryption mechanism is supported.

    Restrictions: Must be gpg.

    encrypt_target

    Encryption target.

    The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.


    Organization of This Manual

    Chapter 1, Introduction

    Provides some background about how Cedar Backup came to be, its history, some general information about what needs it is intended to meet, etc.

    Chapter 2, Basic Concepts

    Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

    Chapter 3, Installation

    Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

    Chapter 4, Command Line Tools

    Discusses the various Cedar Backup command-line tools, including the primary cback command.

    Chapter 5, Configuration

    Provides detailed information about how to configure Cedar Backup.

    Chapter 6, Official Extensions

    Describes each of the officially-supported Cedar Backup extensions.

    Appendix A, Extension Architecture Interface

    Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.

    Appendix B, Dependencies

    Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

    Appendix C, Data Recovery

    Cedar Backup provides no facility for restoring backups; it assumes that the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

    Appendix D, Securing Password-less SSH Connections

    Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.


    Appendix D. Securing Password-less SSH Connections

    Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

    Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

    Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections.

    With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

    Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups.

    So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

    command="command"
       Specifies that the command is executed whenever this key is used for
       authentication.  The command supplied by the user (if any) is ignored.  The
       command is run on a pty if the client requests a pty; otherwise it is run
       without a tty.  If an 8-bit clean channel is required, one must not request
       a pty or should specify no-pty.  A quote may be included in the command by
       quoting it with a backslash.  This option might be useful to restrict
       certain public keys to perform just a specific operation.  An example might
       be a key that permits remote backups but nothing else.  Note that the client
       may specify TCP and/or X11 forwarding unless they are explicitly prohibited.
       Note that this option applies to shell, command or subsystem execution.
          

    Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

    So, let's imagine that we have two hosts: master mickey, and peer minnie. Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
    =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
    1-2341=-a0sd=-sa0=1z= backup@mickey
          

    This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

    To put the filter in place, we add a command option to the key, like this:

    command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
    3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
    tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey
          

    Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

    A very basic validate-backup script might look something like this:

    #!/bin/bash
    if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
        ${SSH_ORIGINAL_COMMAND}
    else
        echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
        exit 1
    fi
          

    This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

    For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

    If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

    Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
    OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
    debug1: Reading configuration data /home/backup/.ssh/config
    debug1: Applying options for daystrom
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
          

    Omit the -v and you have your command: scp -f .profile.

    For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

    scp -f /path/to/collect/cback.collect
    scp -f /path/to/collect/*
    scp -t /path/to/collect/cback.stage
          
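    A sketch of a validate-backup script that allows exactly these three commands might look like this. As in the list above, /path/to/collect is a placeholder; substitute the real collect directory on this peer:

```shell
#!/bin/bash
# Sketch only: a validate-backup filter that allows exactly the three
# scp commands listed above.  /path/to/collect is a placeholder;
# substitute the real collect directory on this peer.
COLLECT="/path/to/collect"

allowed() {
   case "$1" in
      "scp -f ${COLLECT}/cback.collect") return 0 ;;
      "scp -f ${COLLECT}/"*)             return 0 ;;
      "scp -t ${COLLECT}/cback.stage")   return 0 ;;
      *)                                 return 1 ;;
   esac
}

if [ -n "${SSH_ORIGINAL_COMMAND}" ] ; then
   if allowed "${SSH_ORIGINAL_COMMAND}" ; then
      ${SSH_ORIGINAL_COMMAND}
   else
      echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
      exit 1
   fi
fi
```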

    If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

    /usr/bin/cback --full collect
    /usr/bin/cback collect
          

    Of course, you would have to list the actual path to the cback executable — exactly the one listed in the <cback_command> configuration option for your managed peer.

    I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.


    Appendix B. Dependencies

    Python 2.5

    Version 2.5 of the Python interpreter was released on 19 Sep 2006, so most current Linux and BSD distributions should include it.

    If you can't find a package for your system, install from the package source, using the upstream link.

    RSH Server and Client

    Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client.

    The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

    If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

    mkisofs

    The mkisofs command is used to create ISO filesystem images that can later be written to backup media.

    If you can't find a package for your system, install from the package source, using the upstream link.

    I have classified Gentoo as unknown because I can't find a specific package for that platform. I think that maybe mkisofs is part of the cdrtools package (see below), but I'm not sure. Any Gentoo users want to enlighten me?

    cdrecord

    The cdrecord command is used to write ISO images to CD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    dvd+rw-tools

    The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    eject and volname

    The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc.

    The volname command is used to determine the volume name of media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    mount and umount

    The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

    If you can't find a package for your system, install from the package source, using the upstream link.

    I have classified Gentoo as unknown because I can't find a specific package for that platform. It may just be that these two utilities are considered standard, and don't have an independent package of their own. Any Gentoo users want to enlighten me?

    I have classified Mac OS X as built-in because that operating system does contain a mount command. However, it isn't really compatible with Cedar Backup's idea of mount; what Cedar Backup needs is actually closer to the hdiutil command. There are other issues related to that command as well, which is why the store action is not really supported on Mac OS X.

    grepmail

    The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

    If you can't find a package for your system, install from the package source, using the upstream link.

    gpg

    The gpg command is used by the encrypt extension to encrypt files.

    If you can't find a package for your system, install from the package source, using the upstream link.

    split

    The split command is used by the split extension to split up large files.

    This command is typically part of the core operating system install and is not distributed in a separate package.


    Setting up a Master Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).
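    The permissions described above can be enforced with a few chmod commands. This sketch is guarded so that each chmod is skipped when the file does not exist yet; run it as the backup user:

```shell
SSH_DIR=~/.ssh
if [ -d "${SSH_DIR}" ] ; then
   chmod 700 "${SSH_DIR}"
fi
if [ -f "${SSH_DIR}/id_rsa" ] ; then
   chmod 600 "${SSH_DIR}/id_rsa"
fi
if [ -f "${SSH_DIR}/id_rsa.pub" ] ; then
   chmod 644 "${SSH_DIR}/id_rsa.pub"
fi
```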

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
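    The tree can be created with a few commands. This sketch uses a scratch location so it can be run safely; in a real setup you would substitute /opt/backup (or your preferred root), run as root, and chown the tree to your backup user:

```shell
# Scratch location for demonstration; substitute /opt/backup (or your
# preferred root) and run as root in a real setup.
BACKUP_ROOT="/tmp/backup-demo"
mkdir -p "${BACKUP_ROOT}/collect" "${BACKUP_ROOT}/stage" "${BACKUP_ROOT}/tmp"
chmod 700 "${BACKUP_ROOT}" "${BACKUP_ROOT}/collect" "${BACKUP_ROOT}/stage" "${BACKUP_ROOT}/tmp"
# chown -R backup:backup "${BACKUP_ROOT}"   # once the backup user exists
ls "${BACKUP_ROOT}"
```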

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

    Note

    Note that the master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master.

    Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a consolidation point machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

    Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test connectivity to client machines.

    This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client.

    Log in as the backup user on the master, and then use the command ssh user@machine where user is the name of backup user on the client machine, and machine is the name of the client machine.

    If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

    Step 9: Test your backup.

    Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.)

    When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read.
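    One quick way to scan a logfile for problems is a small helper like this. It is only a sketch; the path is the default log location from above, and the "error" pattern is just a suggestion:

```shell
# Sketch: scan a Cedar Backup logfile for problems.
scan_log() {
   if [ -f "$1" ] ; then
      grep -i "error" "$1" || echo "no errors found in $1"
   else
      echo "no logfile at $1 (has a backup run yet?)"
   fi
}

scan_log /var/log/cback.log
```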

    You may also want to run cback purge on the master and each client once you have finished validating that everything worked.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [25] To be safe, always enable the consistency check option in the store configuration section.

    Step 10: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 02 * * * root  cback stage
    30 04 * * * root  cback store
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.
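    With the switch added, the crontab entries shown earlier become:

```
30 00 * * * root  cback --output collect
30 02 * * * root  cback --output stage
30 04 * * * root  cback --output store
30 06 * * * root  cback --output purge
```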

    You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [26]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.


    Capacity Extension

    The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused.

    This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>capacity</name>
          <module>CedarBackup2.extend.capacity</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

    <capacity>
       <max_percentage>95.5</max_percentage>
    </capacity>
          

    This example configures the extension to warn if the media has fewer than 16 MB free:

    <capacity>
       <min_bytes>16 MB</min_bytes>
    </capacity>
          

    The following elements are part of the Capacity configuration section:

    max_percentage

    Maximum percentage of the media that may be utilized.

    You must provide either this value or the min_bytes value.

    Restrictions: Must be a floating point number between 0.0 and 100.0

    min_bytes

    Minimum number of free bytes that must be available.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    You must provide either this value or the max_percentage value.

    Restrictions: Must be a byte quantity as described above.
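The byte-quantity forms described above can be sketched with a small parser. This is an illustrative helper only, not part of Cedar Backup's API, and it assumes binary units (KB = 1024 bytes, and so on), which may differ from what Cedar Backup actually uses internally:

```python
import re

# Hypothetical helper: parse a byte quantity as described above --
# either a bare number of bytes, or a number followed by KB, MB or GB.
# Binary units are an assumption made for this sketch.
_UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_byte_quantity(text):
    match = re.match(r"^\s*([0-9.]+)\s*(KB|MB|GB)?\s*$", text)
    if not match:
        raise ValueError("not a valid byte quantity: %r" % text)
    number, unit = match.groups()
    return float(number) * _UNITS.get(unit, 1)

print(parse_byte_quantity("10240"))   # 10240.0 (bare number, assumed bytes)
print(parse_byte_quantity("16 MB"))   # 16777216.0
```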


    Split Extension

    The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc.

    You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span.

    The split extension uses the standard UNIX split tool to split up the large files. This tool simply splits the files on byte boundaries; it has no knowledge of file formats.

    Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It may not sound like a huge limitation, but cback-span might put an individual file on any disc in a set — the files split from one larger file will not necessarily be stored together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.
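The byte-boundary splitting that split performs can be sketched in a few lines. This is a simplified illustration only; the chunk-naming convention shown here is an assumption for the sketch, not Cedar Backup's actual naming scheme:

```python
# Illustrative sketch: split a file into fixed-size chunks on byte
# boundaries, the way the standard UNIX split tool does.  The chunk
# suffix "_00000" style is hypothetical.
def split_file(path, chunk_size):
    chunks = []
    with open(path, "rb") as source:
        index = 0
        while True:
            data = source.read(chunk_size)
            if not data:
                break
            chunk_path = "%s_%05d" % (path, index)
            with open(chunk_path, "wb") as chunk:
                chunk.write(data)
            chunks.append(chunk_path)
            index += 1
    return chunks
```

Note that a 250 MB file split at 100 MB would yield three chunks, the last only 50 MB, matching the split_size behavior described below.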

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions> 
       <action>
          <name>split</name>
          <module>CedarBackup2.extend.split</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

    <split>
       <size_limit>250 MB</size_limit>
       <split_size>100 MB</split_size>
    </split>
          

    The following elements are part of the Split configuration section:

    size_limit

    Size limit.

    Files with a size strictly larger than this limit will be split by the extension.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.

    split_size

    Split size.

    This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.


    Managed Backups

    Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available (for instance, SourceForge shell accounts).

    When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell.

    To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients.
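Conceptually, the master's remote invocation amounts to something like the following sketch. The ssh transport, the "backup" user name, and the exact command form are assumptions for illustration; Cedar Backup's actual internals may differ:

```python
# Sketch only: build the sort of remote command the managed-backup
# feature implies, assuming an ssh-based remote shell and a backup
# user named "backup" (both assumptions, configurable in practice).
def remote_action_command(client, action, user="backup"):
    return ["ssh", "%s@%s" % (user, client), "cback", action]

print(remote_action_command("client.example.com", "collect"))
```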

    Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.

    However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.


    Chapter 5. Configuration

    Table of Contents

    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy

    Overview

    Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy.

    First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

    Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called “The cback command” (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called “Configuration File Format” (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location.

    After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done.


    Chapter 1. Introduction

    Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it. — Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

    What is Cedar Backup?

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language.

    There are many different backup software implementations out there in the free software and open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data to CD or DVD on a regular basis. Cedar Backup isn't for you if you want to back up your MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, a CVS or Subversion repository, or a small MySQL database, then Cedar Backup is probably worth your time.

    Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

    To run a Cedar Backup client, you really just need a working Python installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided in the section called “Installing Dependencies”.


    Subversion Extension

    The Subversion Extension is a Cedar Backup extension used to back up Subversion [28] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup. [29]
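The incremental dump mechanism mentioned above can be sketched by building the svnadmin command line for a revision range. This is an illustrative sketch of the approach, not the exact invocation Cedar Backup makes internally:

```python
# Illustrative sketch: construct an incremental svnadmin dump command
# for a revision range.  The exact arguments Cedar Backup uses
# internally may differ from this assumption.
def svnadmin_dump_command(repository, from_rev, to_rev):
    return ["svnadmin", "dump", repository,
            "--revision", "%d:%d" % (from_rev, to_rev),
            "--incremental"]

print(svnadmin_dump_command("/opt/public/svn/docs", 0, 100))
```

In incremental mode, each day's dump covers only the revisions committed since the previous dump, rather than the whole repository history.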

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>subversion</name>
          <module>CedarBackup2.extend.subversion</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

    <subversion>
       <collect_mode>incr</collect_mode>
       <compress_mode>bzip2</compress_mode>
       <repository>
          <abs_path>/opt/public/svn/docs</abs_path>
       </repository>
       <repository>
          <abs_path>/opt/public/svn/web</abs_path>
          <compress_mode>gzip</compress_mode>
       </repository>
       <repository_dir>
          <abs_path>/opt/private/svn</abs_path>
          <collect_mode>daily</collect_mode>
       </repository_dir>
    </subversion>
          

    The following elements are part of the Subversion configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    repository

    A Subversion repository to be collected.

    This is a subsection which contains information about a specific Subversion repository to be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    repository_dir

    A Subversion parent repository directory to be collected.

    This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository_dir subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion parent directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this subversion parent directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the subversion parent directory itself. For instance, if the configured subversion parent directory is /opt/svn a configured relative path of software would exclude the path /opt/svn/software.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
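The exclusion semantics described above can be sketched as follows. This is an assumed illustration, not Cedar Backup's actual implementation: rel_path entries are joined to the parent directory, and pattern entries are implicitly anchored at both ends, as if wrapped in ^ and $:

```python
import re

# Sketch of the exclusion rules described above (an assumption for
# illustration, not the real implementation): rel_path entries are
# relative to the parent directory, and patterns are anchored.
def is_excluded(parent, path, rel_paths, patterns):
    if any(path == parent.rstrip("/") + "/" + rel for rel in rel_paths):
        return True
    return any(re.match("^" + pattern + "$", path) for pattern in patterns)

print(is_excluded("/opt/svn", "/opt/svn/software", ["software"], []))  # True
```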


    Installing from Source

    On platforms other than Debian, Cedar Backup is installed from a Python source distribution. [19] You will have to manage dependencies on your own.

    Tip

    Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

    Installing Dependencies

    Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met.

    Cedar Backup is written in Python and requires version 2.5 or greater of the language. Python 2.5 was released on 19 Sep 2006, so by now most current Linux and BSD distributions should include it. You must install Python on every peer node in a pool (master or client).

    Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.

    Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

    • mkisofs

    • eject

    • mount

    • umount

    • volname

    Then, you need this utility if you are writing CD media:

    • cdrecord

    or this utility if you are writing DVD media:

    • growisofs

    All of these utilities are common and are easy to find for almost any UNIX-like operating system.

    Installing the Source Package

    Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py.

    Once you have downloaded the source package from the Cedar Solutions website, [18] untar it:

    $ zcat CedarBackup2-2.0.0.tar.gz | tar xvf -
             

    This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory will always match the version number in the filename.

    If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps:

    $ cd CedarBackup2-2.0.0
    $ python setup.py install
             

    Make sure that you are using Python 2.5 or better to execute setup.py.

    You may also wish to run the unit tests before actually installing anything. Run them like so:

    python util/test.py
             

    If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. [20] This is particularly important for non-Linux platforms where I do not have a test system available to me.

    Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option:

    $ python setup.py --help
    $ python setup.py install --help
             

    In any case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.


    Extensions

    Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step.

    Prior to Cedar Backup 2.0, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

    Starting with version 2.0, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured.

    Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action.
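As a rough illustration of the idea, an extension module might provide a function along these lines. The three-argument signature shown here is an assumption for the sketch; the real interface is defined in Appendix A:

```python
# Hypothetical extension skeleton (signature is an assumption; see
# Appendix A for the real extension interface).
def executeAction(configPath, options, config):
    """Back up some specialized data source as a pseudo-collect step."""
    # Read this extension's own section out of the shared Cedar Backup
    # configuration, then perform the specialized backup work here.
    pass
```

The function name is then associated with an action name in the <extensions> configuration section, so users can run it like any built-in action.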

    Hopefully, as the Cedar Backup 2.0 user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase.

    Note

    Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions.

    Developers may be interested in Appendix A, Extension Architecture Interface.


    Recovering Filesystem Data

    Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar), represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar). Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.
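The path-to-filename mapping described above can be sketched in a few lines. This helper is inferred from the naming convention just described, purely for illustration, and ignores the optional .bz2/.gz compression suffix:

```python
# Sketch (inferred from the naming described above): map an absolute
# directory path to its collect tarfile name by replacing path
# separators with dashes.  Compression suffixes are ignored here.
def tarfile_name(path):
    if path == "/":
        return "-.tar"                 # special case: the root directory
    return path.strip("/").replace("/", "-") + ".tar"

print(tarfile_name("/boot"))             # boot.tar
print(tarfile_name("/var/lib/jspwiki"))  # var-lib-jspwiki.tar
```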

    If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

    Full Restore

    To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.)

    All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location.

    For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

    root:/# bzcat boot.tar.bz2 | tar xvf -
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    If you want to extract boot.tar.gz into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

    root:/tmp# bzcat boot.tar.bz2 | tar xvf -
             

    Again, use zcat or just cat as appropriate.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Partial Restore

    Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things).

    The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Where with a full restore, you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, would appear in more than one backup.

    Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place.

    Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

    root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    The tvf tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternately, you can omit the path/to/file and search through the output using more or less.

    If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there.

    Once you have found your file, extract it using xvf:

    root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file
             

    Again, use zcat or just cat as appropriate.

    Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.


    History

    Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain.

    In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead.

    Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. [4] At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that primarily, Python code often ends up being much more readable than Perl code).

    Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) [5] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release.

    Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code. Since then, we have continued to use Cedar Backup for those sites, and Cedar Backup has picked up a handful of other users who have occasionally reported bugs or requested minor enhancements.

    In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, [6] and updated the code to use the newly-released Python logging package [7] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code.

    So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result is the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. [8]



    [5] Debian's stable releases are named after characters in the Toy Story movie.

    [6] Epydoc is a Python code documentation tool. See http://epydoc.sourceforge.net/.

    [8] Tests are implemented using Python's unit test framework. See http://docs.python.org/lib/module-unittest.html.


    Acknowledgments

    The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Many thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.

    There are not very many Cedar Backup users today, but almost all of them have contributed in some way to the documentation in this manual, either by asking questions, making suggestions or finding bugs. I'm glad to have them as users, and I hope that this new release meets their needs even better than the previous release.

    My wife Julie puts up with a lot. It's sometimes not easy to live with someone who hacks on open source code in his free time — even when you're a pretty good engineer yourself, like she is. First, she managed to live with a dual-boot Debian and Windoze machine; then she managed to get used to IceWM rather than a prettier desktop; and eventually she even managed to cope with vim when she needed to. Now, even after all that, she has graciously volunteered to edit this manual. I much appreciate her skill with a red pen.


    Audience

    This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.


    Cedar Backup Software Manual

    Kenneth J. Pronovici

    This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation.

    For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work.

    This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.


    Table of Contents

    Preface
    Purpose
    Audience
    Conventions Used in This Book
    Typographic Conventions
    Icons
    Organization of This Manual
    Acknowledgments
    1. Introduction
    What is Cedar Backup?
    How to Get Support
    History
    2. Basic Concepts
    General Architecture
    Data Recovery
    Cedar Backup Pools
    The Backup Process
    The Collect Action
    The Stage Action
    The Store Action
    The Purge Action
    The All Action
    The Validate Action
    The Initialize Action
    The Rebuild Action
    Coordination between Master and Clients
    Managed Backups
    Media and Device Types
    Incremental Backups
    Extensions
    3. Installation
    Background
    Installing on a Debian System
    Installing from Source
    Installing Dependencies
    Installing the Source Package
    4. Command Line Tools
    Overview
    The cback command
    Introduction
    Syntax
    Switches
    Actions
    The cback-span command
    Introduction
    Syntax
    Switches
    Using cback-span
    Sample run
    5. Configuration
    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy
    6. Official Extensions
    System Information Extension
    Subversion Extension
    MySQL Extension
    PostgreSQL Extension
    Mbox Extension
    Encrypt Extension
    Split Extension
    Capacity Extension
    A. Extension Architecture Interface
    B. Dependencies
    C. Data Recovery
    Finding your Data
    Recovering Filesystem Data
    Full Restore
    Partial Restore
    Recovering MySQL Data
    Recovering Subversion Data
    Recovering Mailbox Data
    Recovering Data split by the Split Extension
    D. Securing Password-less SSH Connections
    E. Copyright

    The cback-span command

    Introduction

    Cedar Backup was designed around — and is still primarily focused on — weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data.

    However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs.

    cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

    cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

    In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs; the files in those directories will be distributed between discs arbitrarily, so that space is utilized as efficiently as possible.

    Syntax

    The cback-span command has the following syntax:

     Usage: cback-span [switches]
    
     Cedar Backup 'span' tool.
    
     This Cedar Backup utility spans staged data between multiple discs.
     It is a utility, not an extension, and requires user interaction.
    
     The following switches are accepted, mostly to set up underlying
     Cedar Backup functionality:
    
       -h, --help     Display this usage/help listing
       -V, --version  Display version information
       -b, --verbose  Print verbose output as well as logging to disk
       -c, --config   Path to config file (default: /etc/cback.conf)
       -l, --logfile  Path to logfile (default: /var/log/cback.log)
       -o, --owner    Logfile ownership, user:group (default: root:adm)
       -m, --mode     Octal logfile permissions mode (default: 640)
       -O, --output   Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug    Write debugging information to the log (implies --output)
       -s, --stack    Dump a Python stack trace instead of swallowing exceptions
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    Using cback-span

    As discussed above, cback-span is an interactive command. It cannot be run from cron.

    You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.

    The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc cannot actually hold a full 650 MB of data; usable capacity is usually more like 627 MB. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.
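    The effect of the cushion can be sketched with a little arithmetic. This is an illustrative model only, not cback-span's actual calculation — the tool evidently starts from a slightly larger raw capacity, since the sample run below reports 627.25 MB rather than 624 MB for a 650 MB disc with a 4% cushion:

```python
def usable_capacity(media_mb, cushion_percent):
    """Back-of-envelope model: a cushion of c% sets aside c% of the
    stated media capacity, leaving (100 - c)% available for data."""
    return media_mb * (1.0 - cushion_percent / 100.0)

usable_capacity(650.0, 4.0)   # roughly 624 MB under this simple model
usable_capacity(650.0, 1.5)   # a 1.5% cushion leaves 98.5%, about 640 MB
```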

    The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm.

    The four available fit algorithms are:

    worst

    The worst-fit algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    best

    The best-fit algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    first

    The first-fit algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    alternate

    A hybrid algorithm that I call alternate-fit.

    This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.
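    As a rough illustration of the worst-fit strategy described above, here is a short sketch in Python. This is not Cedar Backup's actual implementation, and the item list, function name, and capacity in the usage example are invented:

```python
def worst_fit(items, capacity):
    """Sketch of the worst-fit algorithm as described above.

    items: list of (name, size) tuples; capacity: available space.
    Walks a smallest-to-largest sorted list, throwing away any item
    that would exceed capacity, and stops early on an exact fit.
    """
    chosen, used = [], 0
    for name, size in sorted(items, key=lambda item: item[1]):
        if used + size > capacity:
            continue  # discard this item and try the next one
        chosen.append(name)
        used += size
        if used == capacity:
            break  # capacity met exactly
    return chosen, used

worst_fit([("a", 5), ("b", 3), ("c", 8), ("d", 2)], 10)
# → (["d", "b", "a"], 10)
```

    Sorting smallest-to-largest is what makes this "worst fit": it packs as many items as possible, which is also why it tends to examine more items than best-fit before completing.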

    Sample run

    Below is a log showing a sample cback-span run.

    ================================================
               Cedar Backup 'span' tool
    ================================================
    
    This the Cedar Backup span tool.  It is used to split up staging
    data when that staging data does not fit onto a single disc.
    
    This utility operates using Cedar Backup configuration.  Configuration
    specifies which staging directory to look at and which writer device
    and media type to use.
    
    Continue? [Y/n]: 
    ===
    
    Cedar Backup store configuration looks like this:
    
       Source Directory...: /tmp/staging
       Media Type.........: cdrw-74
       Device Type........: cdwriter
       Device Path........: /dev/cdrom
       Device SCSI ID.....: None
       Drive Speed........: None
       Check Data Flag....: True
       No Eject Flag......: False
    
    Is this OK? [Y/n]: 
    ===
    
    Please wait, indexing the source directory (this may take a while)...
    ===
    
    The following daily staging directories have not yet been written to disc:
    
       /tmp/staging/2007/02/07
       /tmp/staging/2007/02/08
       /tmp/staging/2007/02/09
       /tmp/staging/2007/02/10
       /tmp/staging/2007/02/11
       /tmp/staging/2007/02/12
       /tmp/staging/2007/02/13
       /tmp/staging/2007/02/14
    
    The total size of the data in these directories is 1.00 GB.
    
    Continue? [Y/n]: 
    ===
    
    Based on configuration, the capacity of your media is 650.00 MB.
    
    Since estimates are not perfect and there is some uncertainly in
    media capacity calculations, it is good to have a "cushion",
    a percentage of capacity to set aside.  The cushion reduces the
    capacity of your media, so a 1.5% cushion leaves 98.5% remaining.
    
    What cushion percentage? [4.00]: 
    ===
    
    The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
    It will take at least 2 disc(s) to store your 1.00 GB of data.
    
    Continue? [Y/n]: 
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: 
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "worst-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 246 files, 615.97 MB, 98.20% utilization
    Disc 2: 8 files, 412.96 MB, 65.84% utilization
    
    Accept this solution? [Y/n]: n
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: alternate
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "alternate-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 73 files, 627.25 MB, 100.00% utilization
    Disc 2: 181 files, 401.68 MB, 64.04% utilization
    
    Accept this solution? [Y/n]: y
    ===
    
    Please place the first disc in your backup device.
    Press return when ready.
    ===
    
    Initializing image...
    Writing image to disc...
             

    Recovering Mailbox Data

    Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.

    Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration.

    There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date.
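    As an illustration, the naming scheme described above can be reconstructed like this. The helper below is hypothetical (it is not part of Cedar Backup), but it reproduces the example names from the paragraph:

```python
import datetime

def mbox_backup_name(abs_path, when, suffix=""):
    """Hypothetical reconstruction of the naming scheme: the backup
    date plus the original path with '/' flattened to '-'.  Compression
    may add a further .gz or .bz2 suffix in the real files."""
    flattened = abs_path.strip("/").replace("/", "-")
    return "mbox-%s-%s%s" % (when.strftime("%Y%m%d"), flattened, suffix)

mbox_backup_name("/home/user/mail/greylist", datetime.date(2006, 6, 24))
# → "mbox-20060624-home-user-mail-greylist"
mbox_backup_name("/home/user/mail", datetime.date(2006, 6, 24), ".tar")
# → "mbox-20060624-home-user-mail.tar"
```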

    Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any).

    Here is an example for a single backed-up file:

    root:/tmp# rm restore.mbox # make sure it's not left over
    root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
    root:/tmp# grepmail -a -u restore.mbox > nodups.mbox
          

    At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist.

    Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat.
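    For example, here is the same concatenation performed end-to-end on throwaway gzip-compressed data. The file names and messages are invented for illustration; with real backup files you would finish with the grepmail step shown above:

```shell
# Build two tiny gzip-compressed mbox fragments to stand in for backups.
printf 'From alice@example.com\n\nmessage one\n' | gzip > mbox-part1.gz
printf 'From bob@example.com\n\nmessage two\n' | gzip > mbox-part2.gz

# Concatenate with zcat instead of cat, exactly as described above.
rm -f restore.mbox
zcat mbox-part1.gz >> restore.mbox
zcat mbox-part2.gz >> restore.mbox

grep -c '^From ' restore.mbox   # prints 2: both messages survived
```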

    If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case.


    Mbox Extension

    The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders.

    The mbox extension leverages the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is appended to every day, only the recently-added messages are backed up. This can potentially save a lot of space.

    Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mbox</name>
          <module>CedarBackup2.extend.mbox</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

    <mbox>
       <collect_mode>incr</collect_mode>
       <compress_mode>gzip</compress_mode>
       <file>
          <abs_path>/home/user1/mail/greylist</abs_path>
          <collect_mode>daily</collect_mode>
       </file>
       <dir>
          <abs_path>/home/user2/mail</abs_path>
       </dir>
       <dir>
          <abs_path>/home/user3/mail</abs_path>
          <exclude>
             <rel_path>spam</rel_path>
             <pattern>.*debian.*</pattern>
          </exclude>
       </dir>
    </mbox>
          

    Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively.

    Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed — only relative path exclusions and patterns.

    The following elements are part of the mbox configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    file

    An individual mbox file to be collected.

    This is a subsection which contains information about an individual mbox file to be backed up.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The file subsection contains the following fields:

    collect_mode

    Collect mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox file to back up.

    Restrictions: Must be an absolute path.

    dir

    An mbox directory to be collected.

    This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively. Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The dir subsection contains the following fields:

    collect_mode

    Collect mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
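    The bounding behavior can be demonstrated with a few lines of Python. This is a sketch of the semantics described above, not the extension's actual matching code; the helper name and sample values are invented:

```python
import re

def pattern_matches(pattern, name):
    """Sketch of the bounded-pattern semantics described above: the
    configured pattern is anchored with ^ and $, so it must match the
    whole name, not just a substring of it."""
    return re.match("^" + pattern + "$", name) is not None

# ".*debian.*" matches anywhere in the name because of its wildcards...
pattern_matches(".*debian.*", "lists.debian.org")   # → True
# ...but a bare "debian" only matches a file named exactly "debian".
pattern_matches("debian", "lists.debian.org")       # → False
pattern_matches("debian", "debian")                 # → True
```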


    Chapter 2. Basic Concepts

    General Architecture

    Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality.

    The cback script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback runs setuid [9] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

    The cback script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback.conf, but this path can be overridden at runtime. See Chapter5, Configuration for more information on how Cedar Backup is configured.

    Warning

    You should be aware that backups to CD/DVD media can probably be read by any user who has permission to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called “Encrypt Extension”.

    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy
    6. Official Extensions
    System Information Extension
    Subversion Extension
    MySQL Extension
    PostgreSQL Extension
    Mbox Extension
    Encrypt Extension
    Split Extension
    Capacity Extension
    A. Extension Architecture Interface
    B. Dependencies
    C. Data Recovery
    Finding your Data
    Recovering Filesystem Data
    Full Restore
    Partial Restore
    Recovering MySQL Data
    Recovering Subversion Data
    Recovering Mailbox Data
    Recovering Data split by the Split Extension
    D. Securing Password-less SSH Connections
    E. Copyright

    Preface

    Purpose

    This software manual has been written to document the 2.0 series of Cedar Backup, originally released in early 2005.

    Audience

    This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.

    Conventions Used in This Book

    This section covers the various conventions used in this manual.

    Typographic Conventions

    Term

    Used for first use of important terms.

    Command

    Used for commands, command output, and switches

    Replaceable

    Used for replaceable items in code and text

    Filenames

    Used for file and directory names

    Icons

    Note

    This icon designates a note relating to the surrounding text.

    Tip

    This icon designates a helpful tip relating to the surrounding text.

    Warning

    This icon designates a warning relating to the surrounding text.

    Organization of This Manual

    Chapter 1, Introduction

    Provides some background about how Cedar Backup came to be, its history, some general information about what needs it is intended to meet, etc.

    Chapter 2, Basic Concepts

    Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

    Chapter 3, Installation

    Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

    Chapter 4, Command Line Tools

    Discusses the various Cedar Backup command-line tools, including the primary cback command.

    Chapter 5, Configuration

    Provides detailed information about how to configure Cedar Backup.

    Chapter 6, Official Extensions

    Describes each of the officially-supported Cedar Backup extensions.

    Appendix A, Extension Architecture Interface

    Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.

    Appendix B, Dependencies

    Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

    Appendix C, Data Recovery

    Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

    Appendix D, Securing Password-less SSH Connections

    Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.

    Acknowledgments

    The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Many thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.

    There are not very many Cedar Backup users today, but almost all of them have contributed in some way to the documentation in this manual, either by asking questions, making suggestions or finding bugs. I'm glad to have them as users, and I hope that this new release meets their needs even better than the previous release.

    My wife Julie puts up with a lot. It's sometimes not easy to live with someone who hacks on open source code in his free time — even when you're a pretty good engineer yourself, like she is. First, she managed to live with a dual-boot Debian and Windoze machine; then she managed to get used to IceWM rather than a prettier desktop; and eventually she even managed to cope with vim when she needed to. Now, even after all that, she has graciously volunteered to edit this manual. I much appreciate her skill with a red pen.

    Chapter 1. Introduction

    Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it. — Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

    What is Cedar Backup?

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language.

    There are many different backup software implementations out there in the free software and open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data to CD or DVD on a regular basis. Cedar Backup isn't for you if you want to back up your MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, a CVS or Subversion repository, or a small MySQL database, then Cedar Backup is probably worth your time.

    Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

    To run a Cedar Backup client, you really just need a working Python installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided in the section called “Installing Dependencies”.

    How to Get Support

    Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see.

    If you experience a problem, your best bet is to write the Cedar Backup Users mailing list. [1] This is a public list for all Cedar Backup users. If you write to this list, you might get help from me, or from some other user who has experienced the same thing you have.

    If you know that the problem you have found constitutes a bug, or if you would like to make an enhancement request, then feel free to file a bug report in the Cedar Solutions Bug Tracking System. [2]

    If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write the support address. That mail will go directly to me or to someone else who can help you. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency.

    Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. [3]

    In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

    Tip

    Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well.

    History

    Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain.

    In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead.

    Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. [4] At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that primarily, Python code often ends up being much more readable than Perl code).

    Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) [5] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release.

    Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code. Since then, we have continued to use Cedar Backup for those sites, and Cedar Backup has picked up a handful of other users who have occasionally reported bugs or requested minor enhancements.

    In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, [6] and updated the code to use the newly-released Python logging package [7] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code.

    So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result is the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. [8]



    [1] See SF Mailing Lists at http://cedar-backup.sourceforge.net/.

    [2] See SF Bug Tracking at http://cedar-backup.sourceforge.net/.

    [3] See Simon Tatham's excellent bug reporting tutorial: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html .

    [5] Debian's stable releases are named after characters in the Toy Story movie.

    [6] Epydoc is a Python code documentation tool. See http://epydoc.sourceforge.net/.

    [8] Tests are implemented using Python's unit test framework. See http://docs.python.org/lib/module-unittest.html.

    Chapter 2. Basic Concepts

    General Architecture

    Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality.

    The cback script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback runs setuid[9] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

    The cback script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback.conf, but this path can be overridden at runtime. See Chapter 5, Configuration for more information on how Cedar Backup is configured.

    Warning

    You should be aware that backups to CD/DVD media can probably be read by any user who has permission to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called “Encrypt Extension”.

    Data Recovery

    Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand.

    If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category.

    My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.

    Cedar Backup Pools

    There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines.

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way.

    The Backup Process

    The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control.

    This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called “Coordination between Master and Clients” (later in this chapter) for more information on this subject.

    A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge.

    In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order.
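    As a rough sketch of the ordering rule described above (this is not Cedar Backup's actual implementation, and the function name is invented for illustration), sorting requested actions into the sensible order might look like this:

```python
# Illustrative only: cback's real command-line handling differs.
SENSIBLE_ORDER = ["collect", "stage", "store", "purge"]

def order_actions(requested):
    """Return the requested standard actions sorted into execution order.

    The special "all" action implies every standard action in order.
    """
    if "all" in requested:
        return list(SENSIBLE_ORDER)
    return sorted(requested, key=SENSIBLE_ORDER.index)

print(order_actions(["store", "collect"]))  # → ['collect', 'store']
```

    So even if a user asks for "store collect" on the command line, the collect step still runs before the store step.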

    The cback command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below.

    See Chapter 5, Configuration for more information on how a backup run is configured.

    The Collect Action

    The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2).

    There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.
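    The incremental mode can be sketched as follows. This is an illustration only: Cedar Backup's real on-disk state format and function names differ, and SHA-256 is assumed here where the manual just says "a saved-off SHA hash".

```python
import hashlib
import json
import os

def file_digest(path):
    """Compute a SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(paths, state_file):
    """Return the paths whose digest differs from the saved state,
    then save the current digests for the next run."""
    state = {}
    if os.path.exists(state_file):
        with open(state_file) as f:
            state = json.load(f)
    changed = [p for p in paths if state.get(p) != file_digest(p)]
    state.update({p: file_digest(p) for p in paths})
    with open(state_file, "w") as f:
        json.dump(state, f)
    return changed
```

    On the first run every file is "changed" and gets backed up; on subsequent runs only files whose content digest differs from the saved state are collected.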

    Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file [10] or specify absolute paths or filename patterns [11] to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration.

    This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

    The Stage Action

    The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name.

    For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

    Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh.
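    A minimal sketch of that distinction (not Cedar Backup's actual code; the use of scp in particular is just one possible RSH-compatible transport):

```python
import os
import shutil
import subprocess

def stage_peer(peer_name, collect_dir, staging_dir, remote_host=None):
    """Copy a peer's collect directory into the staging area, by peer name.

    Local peers are copied with normal filesystem operations; remote
    peers are fetched over an RSH-compatible command (scp here).
    """
    dest = os.path.join(staging_dir, peer_name)
    if remote_host is None:
        # Local peer: collect directory is on a mounted filesystem.
        shutil.copytree(collect_dir, dest)
    else:
        # Remote peer: pull the directory over the network.
        subprocess.check_call(
            ["scp", "-r", "%s:%s" % (remote_host, collect_dir), dest])
    return dest
```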

    If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running.

    Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

    Note

    Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

    The Store Action

    The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful.

    If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs.
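    The rebuild-versus-append decision reduces to a simple predicate. The sketch below is illustrative (the function name is invented, and the hard-coded Monday start of week is an assumption; the real start of week depends on configuration):

```python
import datetime

def needs_full_rebuild(today, supports_multisession, full_option,
                       first_weekday=0):
    """True if the disc should be rebuilt from scratch: it is the first
    day of the week, the drive lacks multisession support, or the user
    passed --full. first_weekday=0 means Monday."""
    return (today.weekday() == first_weekday
            or not supports_multisession
            or full_option)

# A Monday with a multisession-capable drive and no --full still rebuilds:
print(needs_full_rebuild(datetime.date(2013, 5, 6), True, False))  # → True
```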

    This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine.

    Warning

    The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    The Purge Action

    The purge action is the fourth and final action in a standard backup run. It executes on both master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged.

    Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.

    The All Action

    The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line.

    Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. [12]

    The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

    The Validate Action

    The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line.

    The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

    The Initialize Action

    The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device.

    However, if the check media option in store configuration is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized.

    Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP).

    Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).
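    The check-media rule described above can be summarized in a small predicate. This is a sketch, not Cedar Backup's actual function: initialized media carries a label beginning with CEDAR BACKUP, and non-rewritable media also passes if it appears unused.

```python
def media_check_passes(media_label, rewritable):
    """Sketch of the media check: pass if the media was initialized
    (label starts with "CEDAR BACKUP"), or if non-rewritable media is
    apparently unused (no label at all)."""
    if media_label is not None and media_label.startswith("CEDAR BACKUP"):
        return True
    return (not rewritable) and media_label is None
```

    So a blank CD-R passes the check, but a CD-RW must actually have been initialized, and media labeled by some other application always fails.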

    The Rebuild Action

    The rebuild action is an exception-handling action that is executed independently of a standard backup run. It cannot be combined with any other actions on the command line.

    The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason.

    To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session.
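    The search for unpurged staging directories might be sketched like this. The date-based YYYY/MM/DD directory layout and the Monday start of week are assumptions for illustration, not Cedar Backup's documented layout:

```python
import datetime
import os

def rebuildable_staging_dirs(staging_root, today, first_weekday=0):
    """Find daily staging directories between the first day of the
    current week and today, assuming date-named subdirectories."""
    start = today - datetime.timedelta(
        days=(today.weekday() - first_weekday) % 7)
    found = []
    day = start
    while day <= today:
        candidate = os.path.join(staging_root, day.strftime("%Y/%m/%d"))
        if os.path.isdir(candidate):
            found.append(candidate)
        day += datetime.timedelta(days=1)
    return found
```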

    The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.

    Coordination between Master and Clients

    Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just “take care of it for me”.

    Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.

    Managed Backups

    Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available (for instance, SourceForge shell accounts).

    When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell.

    To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients.

    Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.

    However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

    Media and Device Types

    Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. [13]

    When using a new enough backup device, a new multisession ISO image [14] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data).

    Cedar Backup currently supports four different kinds of CD media:

    cdr-74

    74-minute non-rewritable CD media

    cdrw-74

    74-minute rewritable CD media

    cdr-80

    80-minute non-rewritable CD media

    cdrw-80

    80-minute rewritable CD media

    I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.

    Cedar Backup also supports two kinds of DVD media:

    dvd+r

    Single-layer non-rewritable DVD+R media

    dvd+rw

    Single-layer rewritable DVD+RW media

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

    Incremental Backups

    Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis.

    In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value [15] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.

    Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.
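    The selection logic described above can be sketched as follows. This is a simplified model, not Cedar Backup's actual code: the select_incremental name and the digest parameter are invented for the example, and the real .sha file handling is omitted.

```python
import hashlib

def sha_digest(path):
    """Compute the SHA-1 digest of a file's contents."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def select_incremental(paths, saved, digest=sha_digest):
    """Return the files needing backup, plus the updated path->checksum map.

    A file is backed up if it is new or its checksum changed; otherwise
    its saved checksum is left unchanged, mirroring the behavior described
    above (though not Cedar Backup's actual implementation).
    """
    updated = dict(saved)
    to_backup = []
    for path in paths:
        checksum = digest(path)
        if saved.get(path) != checksum:
            to_backup.append(path)
            updated[path] = checksum
    return to_backup, updated
```

    On the first day of the week the saved map would simply be reset to empty, which makes every file look new and forces a full backup.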

    Extensions

    Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step.

    Prior to Cedar Backup 2.0, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

    Starting with version 2.0, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured.

    Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action.
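    As an illustration, here is what a trivial extension action function might look like. The three-argument signature (configuration file path, parsed command-line options, parsed configuration) and the executeDatabase name are assumptions made for the sake of the example; see Appendix A, Extension Architecture Interface for the authoritative definition.

```python
# Hypothetical extension module, e.g. saved somewhere on the Python path.
# The function name and argument meanings are assumed for illustration only.

def executeDatabase(configPath, options, config):
    """A made-up 'database' extended action.

    A real extension would pull its own section out of 'config', perform
    the backup, and raise an exception on failure so cback can report it.
    """
    if configPath is None:
        raise ValueError("an extension action needs the configuration path")
    return "database backed up using configuration from %s" % configPath
```

    Once such a function has been associated with an action name in the extensions section of configuration, the new action can be run from the cback command line, alone or alongside the standard actions.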

    Hopefully, as the Cedar Backup 2.0 user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase.

    Note

    Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions.

    Developers may be interested in Appendix A, Extension Architecture Interface.



    [10] Analogous to .cvsignore in CVS

    [11] In terms of Python regular expressions

    [12] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.

    [13] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.

    [14] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a filesystem-within-a-file and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.

    [15] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.

    Chapter 3. Installation

    Background

    There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.

    If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

    Installing on a Debian System

    The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude.

    If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian etch release is the first release to contain Cedar Backup.) Otherwise, you need to install from the Cedar Solutions APT data source. To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file. [17]

    After you have configured the proper APT data source, install Cedar Backup using this set of commands:

    $ apt-get update
    $ apt-get install cedar-backup2 cedar-backup2-doc
          

    Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.

    If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. [18]

    In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

    Note

    The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

    Installing from Source

    On platforms other than Debian, Cedar Backup is installed from a Python source distribution. [19] You will have to manage dependencies on your own.

    Tip

    Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

    Installing Dependencies

    Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met.

    Cedar Backup is written in Python and requires version 2.5 or greater of the language. Python 2.5 was released on 19 Sep 2006, so by now most current Linux and BSD distributions should include it. You must install Python on every peer node in a pool (master or client).
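    A guard like the one cback uses to reject too-old interpreters might look roughly like this; the helper name is invented for the example, and the real check lives inside the cback script itself.

```python
import sys

def interpreter_ok(version_info, minimum=(2, 5)):
    """Return True if the running interpreter meets the minimum version."""
    return tuple(version_info[:2]) >= minimum

if not interpreter_ok(sys.version_info):
    sys.stderr.write("Python %d.%d or better is required\n" % (2, 5))
```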

    Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.

    Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

    • mkisofs

    • eject

    • mount

    • umount

    • volname

    Then, you need this utility if you are writing CD media:

    • cdrecord

    or these utilities if you are writing DVD media:

    • growisofs

    All of these utilities are common and are easy to find for almost any UNIX-like operating system.

    Installing the Source Package

    Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py.

    Once you have downloaded the source package from the Cedar Solutions website, [18] untar it:

    $ zcat CedarBackup2-2.0.0.tar.gz | tar xvf -
             

    This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory name will always match the version number in the filename.

    If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps:

    $ cd CedarBackup2-2.0.0
    $ python setup.py install
             

    Make sure that you are using Python 2.5 or better to execute setup.py.

    You may also wish to run the unit tests before actually installing anything. Run them like so:

    python util/test.py
             

    If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. [20] This is particularly important for non-Linux platforms where I do not have a test system available to me.

    Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option:

    $ python setup.py --help
    $ python setup.py install --help
             

    In any case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

    Chapter 4. Command Line Tools

    Overview

    Cedar Backup comes with two command-line programs, the cback and cback-span commands. The cback command is the primary command line interface and the only Cedar Backup program that most users will ever need.

    Users that have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback-span tool to split their data between multiple discs.

    The cback command

    Introduction

    Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process.

    Syntax

    The cback command has the following syntax:

     Usage: cback [switches] action(s)
    
     The following switches are accepted:
    
       -h, --help         Display this usage/help listing
       -V, --version      Display version information
       -b, --verbose      Print verbose output as well as logging to disk
       -q, --quiet        Run quietly (display no output to the screen)
       -c, --config       Path to config file (default: /etc/cback.conf)
       -f, --full         Perform a full backup, regardless of configuration
       -M, --managed      Include managed clients when executing actions
       -N, --managed-only Include ONLY managed clients when executing actions
       -l, --logfile      Path to logfile (default: /var/log/cback.log)
       -o, --owner        Logfile ownership, user:group (default: root:adm)
       -m, --mode         Octal logfile permissions mode (default: 640)
       -O, --output       Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug        Write debugging information to the log (implies --output)
       -s, --stack        Dump a Python stack trace instead of swallowing exceptions
       -D, --diagnostics  Print runtime diagnostics to the screen and exit
    
     The following actions may be specified:
    
       all                Take all normal actions (collect, stage, store, purge)
       collect            Take the collect action
       stage              Take the stage action
       store              Take the store action
       purge              Take the purge action
       rebuild            Rebuild "this week's" disc if possible
       validate           Validate configuration only
       initialize         Initialize media for use with Cedar Backup
    
     You may also specify extended actions that have been defined in
     configuration.
    
     You must specify at least one action to take.  More than one of
     the "collect", "stage", "store" or "purge" actions and/or
     extended actions may be specified in any arbitrary order; they
     will be executed in a sensible order.  The "all", "rebuild",
     "validate", and "initialize" actions may not be combined with
     other actions.
             

    Note that the all action only executes the standard four actions. It never executes any of the configured extensions. [21]

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -f, --full

    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

    -M, --managed

    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

    -N, --managed-only

    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    Actions

    You can find more information about the various actions in the section called “The Backup Process” (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

    If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.

    The cback-span command

    Introduction

    Cedar Backup was designed — and is still primarily focused — around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data.

    However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs.

    cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

    cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

    In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be arbitrarily split up so that space is utilized most efficiently.

    Syntax

    The cback-span command has the following syntax:

     Usage: cback-span [switches]
    
     Cedar Backup 'span' tool.
    
     This Cedar Backup utility spans staged data between multiple discs.
     It is a utility, not an extension, and requires user interaction.
    
     The following switches are accepted, mostly to set up underlying
     Cedar Backup functionality:
    
       -h, --help     Display this usage/help listing
       -V, --version  Display version information
       -b, --verbose  Print verbose output as well as logging to disk
       -c, --config   Path to config file (default: /etc/cback.conf)
       -l, --logfile  Path to logfile (default: /var/log/cback.log)
       -o, --owner    Logfile ownership, user:group (default: root:adm)
       -m, --mode     Octal logfile permissions mode (default: 640)
       -O, --output   Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug    Write debugging information to the log (implies --output)
       -s, --stack    Dump a Python stack trace instead of swallowing exceptions
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback-span command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback-span command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    Using cback-span

    As discussed above, cback-span is an interactive command. It cannot be run from cron.

    You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.

    The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data; it's usually more like 627 MB once filesystem overhead is taken into account. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.

    The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm.

    The four available fit algorithms are:

    worst

    The worst-fit algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    best

    The best-fit algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    first

    The first-fit algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    alternate

    A hybrid algorithm that I call alternate-fit.

    This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.
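    As a rough illustration of how these algorithms behave, here is a best-fit sketch in Python. The function and its items mapping are invented for the example and are much simpler than the real cback-span implementation.

```python
def best_fit(items, capacity):
    """Best-fit sketch: walk items largest-first, keeping any item that
    still fits, and stopping early if capacity is met exactly.

    'items' maps an item name to its size; returns the chosen names and
    the total size used.  This mirrors the description above, not the
    actual cback-span code.
    """
    chosen, used = [], 0
    for name, size in sorted(items.items(), key=lambda kv: -kv[1]):
        if used + size <= capacity:
            chosen.append(name)
            used += size
            if used == capacity:
                break  # capacity met exactly; no need to look further
    return chosen, used
```

    Worst-fit is the same walk over a smallest-to-largest ordering, and first-fit skips the sort entirely, which is why it can be so much faster on large lists.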

    Sample run

    Below is a log showing a sample cback-span run.

    ================================================
               Cedar Backup 'span' tool
    ================================================
    
    This is the Cedar Backup span tool.  It is used to split up staging
    data when that staging data does not fit onto a single disc.
    
    This utility operates using Cedar Backup configuration.  Configuration
    specifies which staging directory to look at and which writer device
    and media type to use.
    
    Continue? [Y/n]: 
    ===
    
    Cedar Backup store configuration looks like this:
    
       Source Directory...: /tmp/staging
       Media Type.........: cdrw-74
       Device Type........: cdwriter
       Device Path........: /dev/cdrom
       Device SCSI ID.....: None
       Drive Speed........: None
       Check Data Flag....: True
       No Eject Flag......: False
    
    Is this OK? [Y/n]: 
    ===
    
    Please wait, indexing the source directory (this may take a while)...
    ===
    
    The following daily staging directories have not yet been written to disc:
    
       /tmp/staging/2007/02/07
       /tmp/staging/2007/02/08
       /tmp/staging/2007/02/09
       /tmp/staging/2007/02/10
       /tmp/staging/2007/02/11
       /tmp/staging/2007/02/12
       /tmp/staging/2007/02/13
       /tmp/staging/2007/02/14
    
    The total size of the data in these directories is 1.00 GB.
    
    Continue? [Y/n]: 
    ===
    
    Based on configuration, the capacity of your media is 650.00 MB.
    
    Since estimates are not perfect and there is some uncertainty in
    media capacity calculations, it is good to have a "cushion",
    a percentage of capacity to set aside.  The cushion reduces the
    capacity of your media, so a 1.5% cushion leaves 98.5% remaining.
    
    What cushion percentage? [4.00]: 
    ===
    
    The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
    It will take at least 2 disc(s) to store your 1.00 GB of data.
    
    Continue? [Y/n]: 
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: 
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "worst-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 246 files, 615.97 MB, 98.20% utilization
    Disc 2: 8 files, 412.96 MB, 65.84% utilization
    
    Accept this solution? [Y/n]: n
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: alternate
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "alternate-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 73 files, 627.25 MB, 100.00% utilization
    Disc 2: 181 files, 401.68 MB, 64.04% utilization
    
    Accept this solution? [Y/n]: y
    ===
    
    Please place the first disc in your backup device.
    Press return when ready.
    ===
    
    Initializing image...
    Writing image to disc...
             


    [21] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing.

    Chapter 5. Configuration

    Table of Contents

    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy

    Overview

    Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy.

    First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

    Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called “The cback command” (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called “Configuration File Format” (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location.

    After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done.

    Configuration File Format

    Cedar Backup is configured through an XML [22] configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

    All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. [23] The extensions section is always optional and can be omitted unless extensions are in use.

    Note

    Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset.

    Sample Configuration File

    Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes a stripped config file in /etc/cback.conf and a larger sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample.

    This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections.

    <?xml version="1.0"?>
    <cb_config>
       <reference>
          <author>Kenneth J. Pronovici</author>
          <revision>1.3</revision>
          <description>Sample</description>
       </reference>
       <options>
          <starting_day>tuesday</starting_day>
          <working_dir>/opt/backup/tmp</working_dir>
          <backup_user>backup</backup_user>
          <backup_group>group</backup_group>
          <rcp_command>/usr/bin/scp -B</rcp_command>
       </options>
       <peers>
          <peer>
             <name>debian</name>
             <type>local</type>
             <collect_dir>/opt/backup/collect</collect_dir>
          </peer>
       </peers>
       <collect>
          <collect_dir>/opt/backup/collect</collect_dir>
          <collect_mode>daily</collect_mode>
          <archive_mode>targz</archive_mode>
          <ignore_file>.cbignore</ignore_file>
          <dir>
             <abs_path>/etc</abs_path>
             <collect_mode>incr</collect_mode>
          </dir>
          <file>
             <abs_path>/home/root/.profile</abs_path>
             <collect_mode>weekly</collect_mode>
          </file>
       </collect>
       <stage>
          <staging_dir>/opt/backup/staging</staging_dir>
       </stage>
       <store>
          <source_dir>/opt/backup/staging</source_dir>
          <media_type>cdrw-74</media_type>
          <device_type>cdwriter</device_type>
          <target_device>/dev/cdrw</target_device>
          <target_scsi_id>0,0,0</target_scsi_id>
          <drive_speed>4</drive_speed>
          <check_data>Y</check_data>
          <check_media>Y</check_media>
          <warn_midnite>Y</warn_midnite>
       </store>
       <purge>
          <dir>
          <abs_path>/opt/backup/staging</abs_path>
             <retain_days>7</retain_days>
          </dir>
          <dir>
             <abs_path>/opt/backup/collect</abs_path>
             <retain_days>0</retain_days>
          </dir>
       </purge>
    </cb_config>
             

    Reference Configuration

    The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

    This is an example reference configuration section:

    <reference>
       <author>Kenneth J. Pronovici</author>
       <revision>Revision 1.3</revision>
       <description>Sample</description>
       <generator>Yet to be Written Config Tool (tm)</generator>
    </reference>
             

    The following elements are part of the reference configuration section:

    author

    Author of the configuration file.

    Restrictions: None

    revision

    Revision of the configuration file.

    Restrictions: None

    description

    Description of the configuration file.

    Restrictions: None

    generator

    Tool that generated the configuration file, if any.

    Restrictions: None

    Options Configuration

    The options configuration section contains configuration options that are not specific to any one action.

    This is an example options configuration section:

    <options>
       <starting_day>tuesday</starting_day>
       <working_dir>/opt/backup/tmp</working_dir>
       <backup_user>backup</backup_user>
       <backup_group>backup</backup_group>
       <rcp_command>/usr/bin/scp -B</rcp_command>
       <rsh_command>/usr/bin/ssh</rsh_command>
       <cback_command>/usr/bin/cback</cback_command>
       <managed_actions>collect, purge</managed_actions>
       <override>
          <command>cdrecord</command>
          <abs_path>/opt/local/bin/cdrecord</abs_path>
       </override>
       <override>
          <command>mkisofs</command>
          <abs_path>/opt/local/bin/mkisofs</abs_path>
       </override>
       <pre_action_hook>
          <action>collect</action>
          <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
       </pre_action_hook>
       <post_action_hook>
          <action>collect</action>
          <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
       </post_action_hook>
    </options>
             

    The following elements are part of the options configuration section:

    starting_day

    Day that starts the week.

    Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.

    Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.

    working_dir

    Working (temporary) directory to use for backups.

    This directory is used for writing temporary files, such as tar file or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups.

    The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).

    Restrictions: Must be an absolute path

    backup_user

    Effective user that backups should run as.

    This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced).

    This value is also used as the default remote backup user for remote peers.

    Restrictions: Must be non-empty

    backup_group

    Effective group that backups should run as.

    This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced).

    Restrictions: Must be non-empty

    rcp_command

    Default rcp-compatible copy command for staging.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.

    Restrictions: Must be non-empty

    rsh_command

    Default rsh-compatible command to use for remote shells.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty

    cback_command

    Default cback-compatible command to use on managed remote clients.

    The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Default set of actions that are managed on remote clients.

    This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty.
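
    The comma-separated list above is straightforward to parse. The helper below is purely illustrative (the function name is hypothetical, not Cedar Backup's own API); it shows the expected behavior of splitting on commas and trimming whitespace.

```python
def parse_managed_actions(value):
    """Split a comma-separated action list such as "collect, purge",
    trimming surrounding whitespace from each entry.

    Illustrative only; Cedar Backup has its own parsing utility.
    """
    actions = [item.strip() for item in value.split(",") if item.strip()]
    if not actions:
        raise ValueError("managed_actions must be non-empty")
    return actions
```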

    override

    Command to override with a customized path.

    This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    command

    Name of the command to be overridden, i.e. cdrecord.

    Restrictions: Must be a non-empty string.

    abs_path

    The absolute path where the overridden command can be found.

    Restrictions: Must be an absolute path.

    pre_action_hook

    Hook configuring a command to be executed before an action.

    This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.

    post_action_hook

    Hook configuring a command to be executed after an action.

    This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.

    Peers Configuration

    The peers configuration section contains a list of the peers managed by a master. This section is only required on a master.

    This is an example peers configuration section:

    <peers>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <ignore_failures>all</ignore_failures>
       </peer>
       <peer>
          <name>machine3</name>
          <type>remote</type>
          <managed>Y</managed>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <rcp_command>/usr/bin/scp</rcp_command>
          <rsh_command>/usr/bin/ssh</rsh_command>
          <cback_command>/usr/bin/cback</cback_command>
          <managed_actions>collect, purge</managed_actions>
       </peer>
    </peers>
             

    The following elements are part of the peers configuration section:

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer managed by a master.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether “not ready to be staged” errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".
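
    The four modes described above reduce to a small decision table. The sketch below is a hypothetical illustration of those semantics (the function name and parameters are not Cedar Backup's actual implementation):

```python
def should_ignore_failure(mode, is_full_or_start_of_week):
    """Return True if a "not ready to be staged" error should be ignored.

    mode: one of "none", "all", "daily", "weekly"
    is_full_or_start_of_week: True for a full or start-of-week backup

    Illustrative only; shows the mode semantics, not the real code.
    """
    if mode == "none":
        return False                          # report all errors (default)
    if mode == "all":
        return True                           # ignore all failures
    if mode == "weekly":
        return is_full_or_start_of_week       # ignore weekly/full failures
    if mode == "daily":
        return not is_full_or_start_of_week   # ignore mid-week failures
    raise ValueError('mode must be "none", "all", "daily", or "weekly"')
```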

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    managed

    Indicates whether this peer is managed.

    A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether “not ready to be staged” errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    rsh_command

    The rsh-compatible command for this peer.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section.

    Restrictions: Must be non-empty

    cback_command

    The cback-compatible command for this peer.

    The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default cback command from the options section.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Set of actions that are managed for this peer.

    This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section.

    Restrictions: Must be non-empty.

    Collect Configuration

    The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

    In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.

    This is an example collect configuration section:

    <collect>
       <collect_dir>/opt/backup/collect</collect_dir>
       <collect_mode>daily</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <exclude>
          <abs_path>/etc</abs_path>
          <pattern>.*\.conf</pattern>
       </exclude>
       <file>
          <abs_path>/home/root/.profile</abs_path>
       </file>
       <dir>
          <abs_path>/etc</abs_path>
       </dir>
       <dir>
          <abs_path>/var/log</abs_path>
          <collect_mode>incr</collect_mode>
       </dir>
       <dir>
          <abs_path>/opt</abs_path>
          <collect_mode>weekly</collect_mode>
          <exclude>
             <abs_path>/opt/large</abs_path>
             <rel_path>backup</rel_path>
             <pattern>.*tmp</pattern>
          </exclude>
       </dir>
    </collect>
             

    The following elements are part of the collect configuration section:

    collect_dir

    Directory to collect files into.

    On a client, this is the directory which tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory.

    This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form.

    Restrictions: Must be an absolute path

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Default archive mode for collect files.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of tar, targz or tarbz2.
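
    The three archive modes correspond naturally to Python's standard tarfile write modes. The mapping and helper below are illustrative (the names are made up for this sketch, and Cedar Backup's internals may differ):

```python
import tarfile

# Illustrative mapping from archive_mode values to tarfile write modes.
TARFILE_MODES = {
    "tar": "w",         # plain tarfile (file.tar)
    "targz": "w:gz",    # gzipped tarfile (file.tar.gz)
    "tarbz2": "w:bz2",  # bzipped tarfile (file.tar.bz2)
}

def open_archive(path, archive_mode):
    """Open a tar archive for writing using the configured archive mode."""
    return tarfile.open(path, TARFILE_MODES[archive_mode])
```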

    ignore_file

    Default ignore file name.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be non-empty
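
    The pruning behavior described above can be sketched with a short directory walk. This is illustrative only (the function name is hypothetical); it shows how a directory containing the indicator file is skipped along with everything beneath it:

```python
import os

def walk_skipping_ignored(root, ignore_file=".cbignore"):
    """Yield directories under root, pruning any directory (and all of
    its children) that contains the ignore indicator file.

    Illustrative only; Cedar Backup's own traversal code may differ.
    """
    for dirpath, dirnames, filenames in os.walk(root):
        if ignore_file in filenames:
            dirnames[:] = []   # do not descend into children
            continue           # and skip this directory itself
        yield dirpath
```

    For example, creating ~/tmp/.cbignore causes ~/tmp and everything under it to be passed over, while sibling directories are still visited.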

    recursion_level

    Recursion level to use when collecting directories.

    This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory.

    Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory.

    The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead, you want one archive file per home directory you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc.

    Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high.

    This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.

    Restrictions: Must be an integer.
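
    The recursion-level behavior can be sketched as follows. This is a simplified illustration (the function is hypothetical, and it ignores loose files at intermediate levels, which the real implementation must also handle): level 0 archives the directory itself, level 1 archives each immediate subdirectory, and a negative level recurses until the tree is exhausted.

```python
import os

def collect_roots(base, recursion_level):
    """Return the directories that would each get their own archive at
    the given recursion level (0 = the directory itself, 1 = its
    immediate subdirectories, and so on). A negative level recurses
    until the tree is exhausted. Illustrative only.
    """
    if recursion_level == 0:
        return [base]
    subdirs = sorted(
        os.path.join(base, name)
        for name in os.listdir(base)
        if os.path.isdir(os.path.join(base, name))
    )
    if not subdirs:   # level is deeper than the tree: archive this directory
        return [base]
    roots = []
    for sub in subdirs:
        roots.extend(collect_roots(sub, recursion_level - 1))
    return roots
```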

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however.

    This section is optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.
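
    Note that "falls under" means whole path components, not a simple string prefix: an exclusion of /var/log/apache must not accidentally exclude /var/log/apache2. The hypothetical check below illustrates the distinction:

```python
import os

def is_excluded(path, excluded_paths):
    """Return True if path equals or falls under any excluded absolute
    path. The comparison is done on whole path components, so an
    exclusion of /var/log/apache does not match /var/log/apache2.

    Illustrative only; not Cedar Backup's actual implementation.
    """
    parts = os.path.normpath(path).split(os.sep)
    for excluded in excluded_paths:
        eparts = os.path.normpath(excluded).split(os.sep)
        if parts[:len(eparts)] == eparts:
            return True
    return False
```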

    pattern

    A pattern to be recursively excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
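    The anchoring behavior described above can be sketched in Python as follows. This is illustrative only, not Cedar Backup's actual matching code:

```python
import re

def path_matches_pattern(path, pattern):
    """Return True if pattern matches the entire path.

    re.fullmatch anchors the pattern at both ends of the string,
    mirroring the implicit ^ and $ bounding described above.
    """
    return re.fullmatch(pattern, path) is not None

# ".*apache.*" matches /var/log/apache, but the bare pattern "apache"
# does not, because the pattern must cover the whole path.
```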

    file

    A file to be collected.

    This is a subsection which contains information about a specific file to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect file subsection contains the following fields:

    abs_path

    Absolute path of the file to collect.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this file

    The collect mode describes how frequently a file is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this file.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.
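    Putting these fields together, a collect file subsection might look like this illustrative fragment:

```xml
<file>
   <abs_path>/etc/fstab</abs_path>
   <collect_mode>weekly</collect_mode>
   <archive_mode>targz</archive_mode>
</file>
```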

    dir

    A directory to be collected.

    This is a subsection which contains information about a specific directory to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to collect.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level.

    The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc.

    Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this directory

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this directory.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Ignore file name for this directory.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This field is optional. If it doesn't exist, the backup will use the default ignore file name.

    Restrictions: Must be non-empty

    link_depth

    Link depth value to use for this directory.

    The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, and so on.

    This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.

    Restrictions: If set, must be an integer ≥ 0.

    dereference

    Whether to dereference soft links.

    If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well.

    This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory.

    This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.

    Restrictions: Must be a boolean (Y or N).

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    rel_path

    A relative path to be recursively excluded from the backup.

    The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
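    Putting the fields above together, a complete collect directory subsection might look like this illustrative fragment:

```xml
<dir>
   <abs_path>/opt/web</abs_path>
   <collect_mode>incr</collect_mode>
   <archive_mode>tarbz2</archive_mode>
   <ignore_file>.cbignore</ignore_file>
   <!-- follow links one level down, dereferencing their targets -->
   <link_depth>1</link_depth>
   <dereference>Y</dereference>
   <exclude>
      <rel_path>something/else</rel_path>
      <pattern>.*\.tmp</pattern>
   </exclude>
</dir>
```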

    Stage Configuration

    The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged to.

    This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

    This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
    </stage>
             

    This is an example stage configuration section that overrides the default list of peers:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
    </stage>
             

    The following elements are part of the stage configuration section:

    staging_dir

    Directory to stage files into.

    This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself.

    This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.

    Restrictions: Must be an absolute path
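    The date-based layout described above can be sketched in Python like this (illustrative only, not the actual implementation):

```python
import datetime
import os

def staging_path(staging_dir, peer_name, when):
    """Build the date-based staging path for a peer on a given date."""
    date_part = when.strftime("%Y/%m/%d")  # e.g. 2005/02/19
    return os.path.join(staging_dir, date_part, peer_name)

# staging_path("/opt/backup/stage", "daystrom", datetime.date(2005, 2, 19))
# yields /opt/backup/stage/2005/02/19/daystrom
```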

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    Store Configuration

    The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

    This is an example store configuration section:

    <store>
       <source_dir>/opt/backup/stage</source_dir>
       <media_type>cdrw-74</media_type>
       <device_type>cdwriter</device_type>
       <target_device>/dev/cdrw</target_device>
       <target_scsi_id>0,0,0</target_scsi_id>
       <drive_speed>4</drive_speed>
       <check_data>Y</check_data>
       <check_media>Y</check_media>
       <warn_midnite>Y</warn_midnite>
       <no_eject>N</no_eject>
       <refresh_media_delay>15</refresh_media_delay>
       <eject_delay>2</eject_delay>
       <blank_behavior>
          <mode>weekly</mode>
          <factor>1.3</factor>
       </blank_behavior>
    </store>
             

    The following elements are part of the store configuration section:

    source_dir

    Directory whose contents should be written to media.

    This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

    Restrictions: Must be an absolute path

    device_type

    Type of the device used to write the media.

    This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter).

    This field is optional. If it doesn't exist, the cdwriter device type is assumed.

    Restrictions: If set, must be either cdwriter or dvdwriter.

    media_type

    Type of the media in the device.

    Unless you want to throw away a backup disc every week, you are probably best off using rewritable media.

    You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called “Media and Device Types” (in Chapter 2, Basic Concepts).

    Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

    target_device

    Filesystem device name for writer device.

    This value is required for both CD writers and DVD writers.

    This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.

    In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified.

    Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

    Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

    Restrictions: Must be an absolute path.

    target_scsi_id

    SCSI id for the writer device.

    This value is optional for CD writers and is ignored for DVD writers.

    If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord.

    Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

    For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun.

    An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord).

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Restrictions: If set, must be a valid SCSI identifier.
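    A rough validation of the two identifier forms described above could be sketched like this (illustrative patterns only, not Cedar Backup's actual validation code):

```python
import re

# Standard form: scsibus,target,lun (e.g. 1,6,2)
_STANDARD = re.compile(r"^\d+,\d+,\d+$")
# Specialized-method form: <method>:scsibus,target,lun (e.g. ATA:1,0,0)
_METHOD = re.compile(r"^[A-Za-z0-9]+:\d+,\d+,\d+$")

def looks_like_scsi_id(value):
    """Return True if value resembles a valid SCSI identifier."""
    return bool(_STANDARD.match(value) or _METHOD.match(value))
```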

    drive_speed

    Speed of the drive, i.e. 2 for a 2x device.

    This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.

    For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

    Restrictions: If set, must be an integer ≥ 1.

    check_data

    Whether the media should be validated.

    This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.

    Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    check_media

    Whether the media should be checked before writing to it.

    By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.)

    If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day.

    Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    no_eject

    Indicates that the writer device should not be ejected.

    Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session).

    For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will not ever issue an eject command to your writer.

    Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    refresh_media_delay

    Number of seconds to delay after refreshing media

    This field is optional. If it doesn't exist, no delay will occur.

    Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

    Restrictions: If set, must be an integer ≥ 1.

    eject_delay

    Number of seconds to delay after ejecting the tray

    This field is optional. If it doesn't exist, no delay will occur.

    If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly — either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

    Restrictions: If set, must be an integer ≥ 1.

    blank_behavior

    Optimized blanking strategy.

    For more information about Cedar Backup's optimized blanking strategy, see the section called “Optimized Blanking Strategy”.

    This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

    blank_mode

    Blanking mode.

    Restrictions: Must be one of daily or weekly.

    blank_factor

    Blanking factor.

    Restrictions: Must be a floating point number ≥ 0.

    Purge Configuration

    The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

    Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0).

    If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

    You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.

    This is an example purge configuration section:

    <purge>
       <dir>
          <abs_path>/opt/backup/stage</abs_path>
          <retain_days>7</retain_days>
       </dir>
       <dir>
          <abs_path>/opt/backup/collect</abs_path>
          <retain_days>0</retain_days>
       </dir>
    </purge>
             

    The following elements are part of the purge configuration section:

    dir

    A directory to purge within.

    This is a subsection which contains information about a specific directory to purge within.

    This section can be repeated as many times as is necessary. At least one purge directory must be configured.

    The purge directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to purge within.

    The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than retain_days days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

    Restrictions: Must be an absolute path.

    retain_days

    Number of days to retain old files.

    Once it has been more than this many days since a file was last modified, it is a candidate for removal.

    Restrictions: Must be an integer ≥ 0.
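    The retention rule can be sketched as a simple timestamp comparison (an illustrative sketch of the rule, not Cedar Backup's implementation):

```python
SECONDS_PER_DAY = 60 * 60 * 24

def is_purge_candidate(mtime, now, retain_days):
    """True if the file's last-modified time is more than retain_days days old.

    mtime and now are POSIX timestamps in seconds.
    """
    return (now - mtime) > retain_days * SECONDS_PER_DAY
```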

    Extensions Configuration

    The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

    Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

    Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

    Warning

    Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.

    If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed — and you would get no warning about this in your email!

    So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action.

    To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99.

    This is how the hypothetical action would be configured:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>99</index>
       </action>
    </extensions>
             

    The following elements are part of the extensions configuration section:

    action

    This is a subsection that contains configuration related to a single extended action.

    This section can be repeated as many times as is necessary.

    The action subsection contains the following fields:

    name

    Name of the extended action.

    Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

    module

    Name of the Python module associated with the extension function.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    function

    Name of the Python extension function within the module.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    index

    Index of action, for execution ordering.

    Restrictions: Must be an integer ≥ 0.
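    A minimal skeleton for the hypothetical foo.bar() extension above might look like the following sketch. The three-argument signature is an assumption based on the extension architecture interface; check that chapter for the exact contract:

```python
# foo.py: hypothetical extension module mapped to the "database" action.

def bar(configPath, options, config):
    """Back up the database repository; raise an exception on failure.

    A real extension would read its own settings from the parsed
    configuration and do the actual backup work here.
    """
    pass
```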

    Setting up a Pool of One

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.
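    As a sketch, a system crontab for a pool of one might run the four actions overnight like this (the times, user, and spacing between actions are illustrative assumptions; adjust them for your system):

```
30 00 * * * root  cback collect
00 02 * * * root  cback stage
00 04 * * * root  cback store
00 06 * * * root  cback purge
```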

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
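    The layout and permissions above can be set up with a few commands, along these lines (a minimal sketch; substitute your own backup user for the backup placeholder in the commented chown):

```shell
# Create the recommended backup tree with mode-700 directories.
mkdir -p /opt/backup/collect /opt/backup/stage /opt/backup/tmp
chmod 700 /opt/backup /opt/backup/collect /opt/backup/stage /opt/backup/tmp
# chown -R backup /opt/backup    # run as root, once your backup user exists
```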

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.

    Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.
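    For instance, a quick way to catch unclosed tags before running cback validate is to parse the file with any XML parser. This is a sketch assuming python3 is available; cback validate itself performs much deeper checks, while this only tests that the XML is well-formed:

```shell
# Parse the config file; a parse error (such as an unclosed tag)
# makes python3 exit nonzero, so the else branch reports it.
CONF=${CONF:-/etc/cback.conf}
if python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$CONF" 2>/dev/null; then
    echo "$CONF: well-formed XML"
else
    echo "$CONF: missing or malformed XML"
fi
```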

    Step 8: Test your backup.

    Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [25] To be safe, always enable the consistency check option in the store configuration section.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

    30 00 * * * root  cback all
             

    Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

    #!/bin/sh
    cback all
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.

    Setting up a Client Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Note

    See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure the master in your backup pool.

    You will not be able to complete the client configuration until at least step 4 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

    To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

    user@machine> cat ~/.ssh/id_rsa.pub
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
    uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
    HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine
             

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.
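    The steps above can be sketched as follows, run as the backup user. The master_id_rsa.pub path is a placeholder for wherever you saved the master's public identity:

```shell
# Safe to re-run: mkdir -p and touch create anything that is missing.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
# cat master_id_rsa.pub >> ~/.ssh/authorized_keys   # paste the master's key
chmod 600 ~/.ssh/authorized_keys
```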

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night).

    You should create a collect directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a client, you only need to configure the action-specific sections for the collect and purge actions.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

    Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [26]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.

    Setting up a Master Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. For example, if the clients in the pool collect a total of 4 GB per night and the master itself collects 1 GB, plan on at least 2 x 4 + 1 = 9 GB. Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

    Note

    Note that the master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master.

    Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a consolidation point machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

    Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test connectivity to client machines.

    This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client.

    Log in as the backup user on the master, and then use the command ssh user@machine where user is the name of the backup user on the client machine, and machine is the name of the client machine.

    If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.
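    One convenient way to run this check non-interactively (client1 and the backup user name below are placeholders for your own machine and user names):

```shell
# BatchMode=yes makes ssh fail immediately rather than prompting for a
# password, so a success here means key-based login is working.
ssh -o BatchMode=yes backup@client1 true && echo "client1: OK" || echo "client1: FAILED"
```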

    Step 9: Test your backup.

    Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.)

    When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read.

    You may also want to run cback purge on the master and each client once you have finished validating that everything worked.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [25] To be safe, always enable the consistency check option in the store configuration section.

    Step 10: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 02 * * * root  cback stage
    30 04 * * * root  cback store
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [26]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.

    Configuring your Writer Device

    Device Types

    In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

    Devices identified by device name

    For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.

    Devices identified by SCSI id

    Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type.

    In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord; and the device name will be used for other filesystem operations.

    A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system.

    On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device.

    You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1).
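    Putting this together, a SCSI-id configuration supplies both elements; the values below are illustrative only, and the surrounding structure is described in the section called “Configuration File Format”:

```
<target_device>/dev/cdrw</target_device>
<target_scsi_id>ATA:1,1,1</target_scsi_id>
```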

    Linux Notes

    On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later).

    Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.

    However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

    Finding your Linux CD Writer

    Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

    cdrecord -prcap dev=/dev/cdrom
             

    Running this command on my hardware gives output that looks like this (just the top few lines):

    Device type    : Removable CD-ROM
    Version        : 0
    Response Format: 2
    Capabilities   : 
    Vendor_info    : 'LITE-ON '
    Identification : 'DVDRW SOHW-1673S'
    Revision       : 'JS02'
    Device seems to be: Generic mmc2 DVD-R/DVD-RW.
    
    Drive capabilities, per MMC-3 page 2A:
             

    If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank.

    If this doesn't work, you should try to find an ATA or ATAPI device:

    cdrecord -scanbus dev=ATA
    cdrecord -scanbus dev=ATAPI
             

    On my development system, I get a result that looks something like this for ATA:

    scsibus1:
            1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
            1,1,0   101) *
            1,2,0   102) *
            1,3,0   103) *
            1,4,0   104) *
            1,5,0   105) *
            1,6,0   106) *
            1,7,0   107) *
             

    Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>.

    Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

    Mac OS X Notes

    On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, i.e. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.[27]

    Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution.

    Optimized Blanking Strategy

    When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period.

    Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often.

    If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.

    This feature will only be useful (assuming single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

    There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration, otherwise you will risk losing data.

    If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup.

    If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

    bytes available / (1 + bytes required) ≤ blanking factor
          

    Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

    Total size of weekly backup / Full backup size at the start of the week
          

    This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

    /opt/backup/staging# du -s 2007/03/*
    3040    2007/03/01
    3044    2007/03/02
    6812    2007/03/03
    3044    2007/03/04
    3152    2007/03/05
    3056    2007/03/06
    3060    2007/03/07
    3056    2007/03/08
    4776    2007/03/09
    6812    2007/03/10
    11824   2007/03/11
          

    In this case, the ratio is approximately 4:

    (6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571
          

    To be safe, you might choose to configure a factor of 5.0.
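
    The relationship and the worked example above can be sketched in Python; the helper name here is illustrative, not Cedar Backup's actual API:

```python
def should_blank(bytes_available, bytes_required, blanking_factor):
    """Blank the media when bytes available / (1 + bytes required)
    is less than or equal to the configured blanking factor."""
    return bytes_available / (1 + bytes_required) <= blanking_factor

# Estimating the blanking factor from the staging sizes above
# (du -s output in KB; 6812 is the full backup at the start of the week).
full = 6812
incrementals = [3044, 3152, 3056, 3060, 3056, 4776]
factor = (full + sum(incrementals)) / full
print(round(factor, 4))  # 3.9571
```

    With these numbers, the configured factor of 5.0 suggested above leaves some headroom over the observed ratio.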

    Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary.

    If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.



    [22] See http://www.xml.com/pub/a/98/10/guide0.html for a basic introduction to XML.

    [25] See SF Bug Tracking at http://cedar-backup.sourceforge.net/.

    [27] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information.

    Chapter 6. Official Extensions

    System Information Extension

    The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action.

    This extension saves off the following information to the configured Cedar Backup collect directory. The saved data is always compressed using bzip2.

    • Currently-installed Debian packages via dpkg --get-selections

    • Disk partition information via fdisk -l

    • System-wide mounted filesystem contents, via ls -laR

    The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.
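
    In rough terms, the extension's behavior amounts to capturing a command's output and writing it out bzip2-compressed into the collect directory. The following Python sketch shows the idea; the function name and paths are illustrative, not the extension's real code:

```python
import bz2
import os
import subprocess

def save_command_output(command, collect_dir, name):
    # Capture the command's stdout, failing loudly on a non-zero exit.
    output = subprocess.run(command, capture_output=True, check=True).stdout
    # Write the bzip2-compressed result into the collect directory.
    path = os.path.join(collect_dir, name + ".txt.bz2")
    with bz2.open(path, "wb") as f:
        f.write(output)
    return path

# Debian package data is only collected where dpkg actually exists.
if os.path.exists("/usr/bin/dpkg"):
    save_command_output(["dpkg", "--get-selections"], "/tmp", "dpkg-selections")
```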

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>sysinfo</name>
          <module>CedarBackup2.extend.sysinfo</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

    Subversion Extension

    The Subversion Extension is a Cedar Backup extension used to back up Subversion [28] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup. [29]

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>subversion</name>
          <module>CedarBackup2.extend.subversion</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

    <subversion>
       <collect_mode>incr</collect_mode>
       <compress_mode>bzip2</compress_mode>
       <repository>
          <abs_path>/opt/public/svn/docs</abs_path>
       </repository>
       <repository>
          <abs_path>/opt/public/svn/web</abs_path>
          <compress_mode>gzip</compress_mode>
       </repository>
       <repository_dir>
          <abs_path>/opt/private/svn</abs_path>
          <collect_mode>daily</collect_mode>
       </repository_dir>
    </subversion>
          

    The following elements are part of the Subversion configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    repository

    A Subversion repository to be collected.

    This is a subsection which contains information about a specific Subversion repository to be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    repository_dir

    A Subversion parent repository directory to be collected.

    This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository_dir subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this subversion parent directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the subversion parent directory itself. For instance, if the configured subversion parent directory is /opt/svn a configured relative path of software would exclude the path /opt/svn/software.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
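
    Because exclusion patterns are implicitly anchored, a pattern excludes a path only when it matches the entire relative path. A short Python sketch of the equivalent matching (the helper name is illustrative):

```python
import re

def pattern_excludes(pattern, rel_path):
    # The configured pattern is treated as if it begins with ^ and
    # ends with $, so it must match the whole path, not a substring.
    return re.match("^" + pattern + "$", rel_path) is not None

print(pattern_excludes(r".*debian.*", "lists/debian-user"))  # True
print(pattern_excludes(r"debian", "lists/debian-user"))      # False (not a full match)
```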

    MySQL Extension

    The MySQL Extension is a Cedar Backup extension used to back up MySQL [30] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Note

    This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.
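
    The shape of the resulting invocations can be sketched with an illustrative command builder (not the extension's actual code; it ignores compression and credential handling):

```python
def build_mysqldump_commands(all_databases, databases=()):
    # One big dump when all databases are backed up together...
    if all_databases:
        return [["mysqldump", "--all-databases"]]
    # ...or one dump (and one output file) per named database.
    return [["mysqldump", "--databases", database] for database in databases]

print(build_mysqldump_commands(True))
print(build_mysqldump_commands(False, ["db1", "db2"]))
```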

    The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

    Warning

    The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

    [mysqldump]
    user     = root
    password = <secret>
             

    Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

    As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

    [mysqldump]
    host = remote.host
             

    For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

    Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mysql</name>
          <module>CedarBackup2.extend.mysql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

    <mysql>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

    <mysql>
       <user>root</user>
       <password>password</password>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    The following elements are part of the MySQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user).

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    password

    Password associated with the database user.

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.

    PostgreSQL Extension

    The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL [31] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.
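
    The choice between the two commands can be sketched with an illustrative command builder (not the extension's actual code); pg_dumpall handles the all-databases case and pg_dump handles individual databases:

```python
def build_postgresql_commands(all_databases, user=None, databases=()):
    # Pass -U only when a database user was configured.
    user_args = ["-U", user] if user else []
    # pg_dumpall produces one big dump of every database...
    if all_databases:
        return [["pg_dumpall"] + user_args]
    # ...while pg_dump produces one dump per named database.
    return [["pg_dump"] + user_args + [database] for database in databases]

print(build_postgresql_commands(True, user="username"))
print(build_postgresql_commands(False, user="username", databases=["db1", "db2"]))
```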

    The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file.

    This extension always produces a full backup. There is currently no facility for making incremental backups.

    Warning

    Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>postgresql</name>
          <module>CedarBackup2.extend.postgresql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>Y</all>
    </postgresql>
          

    If you decide to back up specific databases, then you would list them individually, like this:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>N</all>
       <database>db1</database>
       <database>db2</database>
    </postgresql>
          

    The following elements are part of the PostgreSQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user.

    This value is optional.

    Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.

    Mbox Extension

    The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders.

    What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space.
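
    The date-restricted selection looks roughly like the following; this is an illustrative sketch, and the exact grepmail arguments Cedar Backup uses may differ:

```python
def build_grepmail_command(mbox_path, since_date):
    # grepmail's -d option restricts matching to a date phrase,
    # e.g. "since 01 Jan 2013", so only newly received messages
    # are emitted for the incremental backup.
    return ["grepmail", "-d", "since %s" % since_date, mbox_path]

print(build_grepmail_command("/home/user2/mail/inbox", "01 Jan 2013"))
```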

    Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mbox</name>
          <module>CedarBackup2.extend.mbox</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

    <mbox>
       <collect_mode>incr</collect_mode>
       <compress_mode>gzip</compress_mode>
       <file>
          <abs_path>/home/user1/mail/greylist</abs_path>
          <collect_mode>daily</collect_mode>
       </file>
       <dir>
          <abs_path>/home/user2/mail</abs_path>
       </dir>
       <dir>
          <abs_path>/home/user3/mail</abs_path>
          <exclude>
             <rel_path>spam</rel_path>
             <pattern>.*debian.*</pattern>
          </exclude>
       </dir>
    </mbox>
          

    Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively.

    Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed — only relative path exclusions and patterns.

    The following elements are part of the mbox configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    file

    An individual mbox file to be collected.

    This is a subsection which contains information about an individual mbox file to be backed up.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The file subsection contains the following fields:

    collect_mode

    Collect mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox file to back up.

    Restrictions: Must be an absolute path.

    dir

    An mbox directory to be collected.

    This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively: only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The dir subsection contains the following fields:

    collect_mode

    Collect mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
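
    The anchoring behavior described above can be illustrated with Python's re module. This is just a sketch of the matching semantics; is_excluded and the sample pattern spam.* are my own hypothetical names, not part of Cedar Backup:

```python
import re

def is_excluded(filename, pattern):
    """Return True if filename matches pattern anchored at both ends,
    mirroring how exclusion patterns are treated (implicit ^ and $)."""
    return re.match(r"^" + pattern + r"$", filename) is not None

# The pattern must match the whole name, not just a substring of it:
print(is_excluded("spam-folder", "spam.*"))      # True
print(is_excluded("old-spam-folder", "spam.*"))  # False: pattern is anchored at the front
```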

    Encrypt Extension

    The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc.

    There are several different ways encryption could have been built into or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced.

    Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

    Warning

    If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless.

    I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

    Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (e.g. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key, because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)

    An encrypted backup has the same file structure as a normal backup, so all of the instructions in Appendix C, Data Recovery apply. The only difference is that encrypted files will have an additional .gpg extension (so, for instance, file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

    Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>encrypt</name>
          <module>CedarBackup2.extend.encrypt</module>
          <function>executeAction</function>
          <index>301</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

    <encrypt>
       <encrypt_mode>gpg</encrypt_mode>
       <encrypt_target>Backup User</encrypt_target>
    </encrypt>
          

    The following elements are part of the Encrypt configuration section:

    encrypt_mode

    Encryption mode.

    This value specifies which encryption mechanism will be used by the extension.

    Currently, only the GPG public-key encryption mechanism is supported.

    Restrictions: Must be gpg.

    encrypt_target

    Encryption target.

    The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.

    Split Extension

    The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc.

    You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span.

    The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files on byte boundaries; it has no knowledge of file formats.

    Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It may not sound like a huge limitation. However, cback-span might put an individual file on any disc in a set — the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.
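
    The effect of splitting on plain byte boundaries can be sketched in a few lines of Python. This is only an illustration of the concept (split_bytes is my own hypothetical helper; the real extension invokes the UNIX split tool):

```python
def split_bytes(data, chunk_size):
    """Split a byte string into fixed-size chunks, the way the UNIX split
    tool divides a file; the final chunk may be shorter than the rest."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

chunks = split_bytes(b"x" * 250, 100)
print([len(c) for c in chunks])          # [100, 100, 50]
# Every chunk is required to reconstruct the original file:
assert b"".join(chunks) == b"x" * 250
```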

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions> 
       <action>
          <name>split</name>
          <module>CedarBackup2.extend.split</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

    <split>
       <size_limit>250 MB</size_limit>
       <split_size>100 MB</split_size>
    </split>
          

    The following elements are part of the Split configuration section:

    size_limit

    Size limit.

    Files with a size strictly larger than this limit will be split by the extension.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.

    split_size

    Split size.

    This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.
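
    The size format accepted by size_limit and split_size can be sketched as follows. This is my own reconstruction of the rule described above (parse_size is a hypothetical helper, and the assumption that KB/MB/GB are powers of 1024 is mine, not confirmed by the text):

```python
# Assumed multipliers: KB/MB/GB as powers of 1024 (an assumption).
_UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_size(value):
    """Parse '10240', '250 MB', or '1.1 GB' into a byte count.
    A bare number is taken to be a count of bytes."""
    parts = value.split()
    if len(parts) == 1:
        return float(parts[0])
    number, unit = parts
    return float(number) * _UNITS[unit.upper()]

print(parse_size("10240"))   # 10240.0
print(parse_size("250 MB"))  # 262144000.0
```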

    Capacity Extension

    The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused.

    This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>capacity</name>
          <module>CedarBackup2.extend.capacity</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

    <capacity>
       <max_percentage>95.5</max_percentage>
    </capacity>
          

    This example configures the extension to warn if the media has fewer than 16 MB free:

    <capacity>
       <min_bytes>16 MB</min_bytes>
    </capacity>
          

    The following elements are part of the Capacity configuration section:

    max_percentage

    Maximum percentage of the media that may be utilized.

    You must provide either this value or the min_bytes value.

    Restrictions: Must be a floating point number between 0.0 and 100.0

    min_bytes

    Minimum number of free bytes that must be available.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    You must provide either this value or the max_percentage value.

    Restrictions: Must be a byte quantity as described above.
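
    The two thresholds can be sketched as a single check. This is a minimal illustration of the logic described above, not the extension's actual implementation (media_warning and its parameters are hypothetical names):

```python
def media_warning(capacity, used, max_percentage=None, min_bytes=None):
    """Return True if a capacity warning should be printed: either the
    media utilization exceeds max_percentage, or the free space has
    dropped below min_bytes.  As in configuration, exactly one of the
    two thresholds is expected to be provided."""
    if max_percentage is not None:
        return (used / float(capacity)) * 100.0 > max_percentage
    return capacity - used < min_bytes

# A 700 MB disc with 680 MB used is more than 95.5% full, so warn:
print(media_warning(700 * 1024 ** 2, 680 * 1024 ** 2, max_percentage=95.5))  # True
```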

    Appendix A. Extension Architecture Interface

    The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension.

    You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.

    There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>101</index>
       </action> 
    </extensions>
          

    In this case, the action database has been mapped to the extension function foo.bar().

    Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

    1. Extensions may not write to stdout or stderr using functions such as print or sys.stdout.write.

    2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

    3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

    4. Extensions may not return any value.

    5. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

    6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

    7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritence. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

    8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

    Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration.

    def function(configPath, options, config):
       """Sample extension function."""
       pass
          

    This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed.

    The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3).
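
    As a sketch, an extension function that follows the signature and rules above might look like this. Everything here is hypothetical (the action name, the logger topic suffix, and the internals); only the standard logging module is used, so the sketch does not depend on the CedarBackup2 package itself:

```python
import logging

# Rule 2: all logging happens via the Python logging facility on the
# CedarBackup2.log topic (the ".extend.database" suffix is my own choice).
logger = logging.getLogger("CedarBackup2.log.extend.database")

def executeAction(configPath, options, config):
    """Hypothetical extended action: never prints, returns no value,
    and surfaces failures as exceptions per rules 1, 4 and 5."""
    logger.info("Executing database extended action.")
    try:
        pass  # real work would go here, e.g. dumping each repository
    except Exception:
        logger.error("Database extended action failed.")
        raise  # rule 5: failures must result in a thrown exception
```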

    If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions.

    For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

    <database>
       <repository>/path/to/repo1</repository>
       <repository>/path/to/repo2</repository>
    </database>
          

    In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.
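
    The second option (parsing the new section yourself) can be sketched with the standard library XML parser. The <database> section and parseRepositories are the hypothetical examples from the text, not real Cedar Backup code:

```python
import xml.etree.ElementTree as ET

def parseRepositories(xmlData):
    """Return the list of repository paths from a <database> section
    embedded in a Cedar Backup-style configuration document."""
    root = ET.fromstring(xmlData)
    database = root.find("database")
    return [node.text for node in database.findall("repository")]

config = """<cb_config>
   <database>
      <repository>/path/to/repo1</repository>
      <repository>/path/to/repo2</repository>
   </database>
</cb_config>"""
print(parseRepositories(config))  # ['/path/to/repo1', '/path/to/repo2']
```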

    Appendix B. Dependencies

    Python 2.5

    Version 2.5 of the Python interpreter was released on 19 Sep 2006, so most current Linux and BSD distributions should include it.

    If you can't find a package for your system, install from the package source, using the upstream link.

    RSH Server and Client

    Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client.

    The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

    If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

    mkisofs

    The mkisofs command is used to create ISO filesystem images that can later be written to backup media.

    If you can't find a package for your system, install from the package source, using the upstream link.

    I have classified Gentoo as unknown because I can't find a specific package for that platform. I think that maybe mkisofs is part of the cdrtools package (see below), but I'm not sure. Any Gentoo users want to enlighten me?

    cdrecord

    The cdrecord command is used to write ISO images to CD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    dvd+rw-tools

    The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    eject and volname

    The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc.

    The volname command is used to determine the volume name of media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    mount and umount

    The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

    If you can't find a package for your system, install from the package source, using the upstream link.

    I have classified Gentoo as unknown because I can't find a specific package for that platform. It may just be that these two utilities are considered standard, and don't have an independent package of their own. Any Gentoo users want to enlighten me?

    I have classified Mac OS X as built-in because that operating system does contain a mount command. However, it isn't really compatible with Cedar Backup's idea of mount; what Cedar Backup needs is closer to the hdiutil command. Unfortunately, there are other issues related to that command as well, which is why the store action is not really supported on Mac OS X.

    grepmail

    The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

    If you can't find a package for your system, install from the package source, using the upstream link.

    gpg

    The gpg command is used by the encrypt extension to encrypt files.

    If you can't find a package for your system, install from the package source, using the upstream link.

    split

    The split command is used by the split extension to split up large files.

    This command is typically part of the core operating system install and is not distributed in a separate package.

    Appendix C. Data Recovery

    Finding your Data

    The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is that if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.)

    Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name.

    This is the root directory of my example disc:

    root:/mnt/cdrw# ls -l
    total 4
    drwxr-x---  3 backup backup 4096 Sep 01 06:30 2005/
          

    In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).

    Within each year directory is one subdirectory for each month represented in the backup.

    root:/mnt/cdrw/2005# ls -l
    total 2
    dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/
          

    In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005).

    Within each month directory is one subdirectory for each day represented in the backup.

    root:/mnt/cdrw/2005/09# ls -l
    total 8
    dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
    dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
    dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
    dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/
          

    Depending on how far into the week your backup media is, you might have as few as one daily directory in here, or as many as seven.

    Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

    root:/mnt/cdrw/2005/09/07# ls -l
    total 10
    dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
    -r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
    dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
    dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/
          

    In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27.

    Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.

    root:/mnt/cdrw/2005/09/07/host1# ls -l
    total 157976
    -r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
    -r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
    -r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
    -r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
    -r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
    -r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
    -r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
    -r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
    -r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
    -r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
    -r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
    -r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
    -r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
    -r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
    -r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2
             

    As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent.

    Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.

    The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension.

    The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).

    Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, e.g. 783-785, followed by 786-800, etc.

    Recovering Filesystem Data

    Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar) represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar.) Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.
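
    The naming convention just described can be sketched in a few lines of Python. This is my own reconstruction of the rule (tarfileName is a hypothetical helper, not part of Cedar Backup):

```python
def tarfileName(path, extension=".tar"):
    """Map a collected directory path to its backup file name: interior
    slashes become dashes, and the root directory maps to the special
    name '-'.  The compression extension, if any, would be appended."""
    if path == "/":
        return "-" + extension
    return path.strip("/").replace("/", "-") + extension

print(tarfileName("/boot"))             # boot.tar
print(tarfileName("/var/lib/jspwiki"))  # var-lib-jspwiki.tar
print(tarfileName("/"))                 # -.tar
```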

    If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

    Full Restore

    To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.)

    All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location.

    For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

    root:/# bzcat boot.tar.bz2 | tar xvf -
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

    root:/tmp# bzcat boot.tar.bz2 | tar xvf -
             

    Again, use zcat or just cat as appropriate.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Partial Restore

    Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things).

    The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Whereas with a full restore you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, would appear in more than one backup.

    Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place.

    Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

    root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    The tvf flags tell tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no leading /). Alternately, you can omit the path/to/file and search through the output using more or less.

    If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there.

    Once you have found your file, extract it using xvf:

    root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file
             

    Again, use zcat or just cat as appropriate.

    Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Recovering MySQL Data

    MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

    Warning

    I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it!

    MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

    First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration.

    If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute:

    daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.

    If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root
          

    Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database
          

    Again, use zcat or just cat as appropriate.

    For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.

    Recovering Subversion Data

    Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, e.g. 783-785, followed by 786-800, etc.
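
    When working out which dump files you need, it can help to read the revision range back out of the file names. This is a small sketch based on the naming convention described above (dumpRevisions is my own hypothetical helper):

```python
def dumpRevisions(filename):
    """Return the (start, end) revisions encoded in a svndump file name
    like svndump-0:782-opt-svn-repo1.txt.bz2.  A start of zero means
    the file holds a full backup of the repository to that point."""
    revisions = filename.split("-", 2)[1]   # e.g. "0:782"
    start, end = revisions.split(":")
    return int(start), int(end)

start, end = dumpRevisions("svndump-0:782-opt-svn-repo1.txt.bz2")
print(start, end)   # 0 782
print(start == 0)   # True: this is a full backup
```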

    Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

    Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic.

    root:/tmp# svnadmin create --fs-type=fsfs testrepo
          

    Next, load the full backup into the repository:

    root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Follow that with loads for each of the incremental backups:

    root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
    root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Again, use zcat or just cat as appropriate.

    When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).

    Note

    Don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both the old and new repositories, the results are identical. This means that the repositories do contain the same content.

    For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).

    Recovering Mailbox Data

    Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.

    Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration.

    There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date.

    Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any).

    Here is an example for a single backed-up file:

    root:/tmp# rm restore.mbox # make sure it's not left over
    root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
    root:/tmp# grepmail -a -u restore.mbox > nodups.mbox
          

    At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist.

    Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat.

    If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case.

    Recovering Data split by the Split Extension

    The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command.

    The split-up files are not difficult to work with. Simply find all of the files — which could be split between multiple discs — and concatenate them together.

    root:/tmp# rm usr-src-software.tar.gz  # make sure it's not there
    root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz
          

    Then, use the resulting file as usual.

    Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
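
    Given that caveat, the concatenation step can be made a little safer by checking for gaps in the chunk sequence first. This is only a sketch: the _00001-style suffixes follow the examples above, and the reassemble helper is a hypothetical name, not part of Cedar Backup.

```shell
#!/bin/bash
# Sketch: reassemble a file from chunks named BASE_00001, BASE_00002, ...
# after verifying that the sequence has no gaps.  The helper name and
# file names are illustrative only.

reassemble() {
    local base="$1" i=1 chunk expected
    shopt -s nullglob
    local chunks=( "${base}"_* )    # lexical sort matches numeric order
    if (( ${#chunks[@]} == 0 )) ; then
        echo "no chunks found for ${base}" >&2
        return 1
    fi
    for chunk in "${chunks[@]}" ; do
        expected=$(printf '%s_%05d' "$base" "$i")
        if [[ "$chunk" != "$expected" ]] ; then
            echo "missing chunk: ${expected}" >&2
            return 1
        fi
        i=$((i + 1))
    done
    cat "${chunks[@]}" > "$base"
}

# Example: reassemble usr-src-software.tar.gz from its chunks.
# reassemble usr-src-software.tar.gz
```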

    Appendix D. Securing Password-less SSH Connections

    Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

    Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

    Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections.

    With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

    Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups.

    So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

    command="command"
       Specifies that the command is executed whenever this key is used for
       authentication.  The command supplied by the user (if any) is ignored.  The
       command is run on a pty if the client requests a pty; otherwise it is run
       without a tty.  If an 8-bit clean channel is required, one must not request
       a pty or should specify no-pty.  A quote may be included in the command by
       quoting it with a backslash.  This option might be useful to restrict
       certain public keys to perform just a specific operation.  An example might
       be a key that permits remote backups but nothing else.  Note that the client
       may specify TCP and/or X11 forwarding unless they are explicitly prohibited.
       Note that this option applies to shell, command or subsystem execution.
          

    Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

    So, let's imagine that we have two hosts: master mickey, and peer minnie. Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
    =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
    1-2341=-a0sd=-sa0=1z= backup@mickey
          

    This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

    To put the filter in place, we add a command option to the key, like this:

    command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
    3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
    tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey
          

    Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, sshd gives the command access to certain environment variables (in particular, SSH_ORIGINAL_COMMAND) that can be used to invoke the original command if you want to.

    A very basic validate-backup script might look something like this:

    #!/bin/bash
    if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
        ${SSH_ORIGINAL_COMMAND}
    else
        echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
        exit 1
    fi
          

    This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

    For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

    If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

    Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
    OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
    debug1: Reading configuration data /home/backup/.ssh/config
    debug1: Applying options for daystrom
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
          

    Omit the -v and you have your command: scp -f .profile.

    For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

    scp -f /path/to/collect/cback.collect
    scp -f /path/to/collect/*
    scp -t /path/to/collect/cback.stage
          

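    For this non-managed case, the basic validate-backup script shown earlier can be extended to allow exactly those three commands. This is only a sketch: the collect path is a placeholder, and wrapping the check in a function is an illustrative choice, not something Cedar Backup requires.

```shell
#!/bin/bash
# Sketch of a validate-backup script for a non-managed peer.  The
# collect path is a placeholder; substitute the real path on your peer.
COLLECT="/path/to/collect"

# Return 0 if the incoming command is one of the three allowed forms.
validate() {
    case "$1" in
        "scp -f ${COLLECT}/cback.collect" | \
        "scp -f ${COLLECT}/*" | \
        "scp -t ${COLLECT}/cback.stage")
            return 0
            ;;
        *)
            echo "Security policy does not allow command [$1]."
            return 1
            ;;
    esac
}

# When invoked by sshd, run the original command only if it is allowed.
if [[ -n "${SSH_ORIGINAL_COMMAND}" ]] ; then
    validate "${SSH_ORIGINAL_COMMAND}" && exec ${SSH_ORIGINAL_COMMAND}
    exit 1
fi
```
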
    If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

    /usr/bin/cback --full collect
    /usr/bin/cback collect
          

    Of course, you would have to list the actual path to the cback executable — exactly the one listed in the <cback_command> configuration option for your managed peer.

    I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

    Appendix E. Copyright

    
    Copyright (c) 2005-2010
    Kenneth J. Pronovici
    
    This work is free; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (the "GPL"), Version 2,
    as published by the Free Software Foundation.
    
    For the purposes of the GPL, the "preferred form of modification"
    for this work is the original Docbook XML text files.  If you
    choose to distribute this work in a compiled form (i.e. if you
    distribute HTML, PDF or Postscript documents based on the original
    Docbook XML text files), you must also consider image files to be
    "source code" if those images are required in order to construct a
    complete and readable compiled version of the work.
    
    This work is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Copies of the GNU General Public License are available from
    the Free Software Foundation website, http://www.gnu.org/.
    You may also write the Free Software Foundation, Inc., 59 Temple
    Place, Suite 330, Boston, MA 02111-1307 USA.
    
    ====================================================================
    
    		    GNU GENERAL PUBLIC LICENSE
    		       Version 2, June 1991
    
     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
         59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    			    Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    License is intended to guarantee your freedom to share and change free
    software--to make sure the software is free for all its users.  This
    General Public License applies to most of the Free Software
    Foundation's software and to any other program whose authors commit to
    using it.  (Some other Free Software Foundation software is covered by
    the GNU Library General Public License instead.)  You can apply it to
    your programs, too.
    
      When we speak of free software, we are referring to freedom, not
    price.  Our General Public Licenses are designed to make sure that you
    have the freedom to distribute copies of free software (and charge for
    this service if you wish), that you receive source code or can get it
    if you want it, that you can change the software or use pieces of it
    in new free programs; and that you know you can do these things.
    
      To protect your rights, we need to make restrictions that forbid
    anyone to deny you these rights or to ask you to surrender the rights.
    These restrictions translate to certain responsibilities for you if you
    distribute copies of the software, or if you modify it.
    
      For example, if you distribute copies of such a program, whether
    gratis or for a fee, you must give the recipients all the rights that
    you have.  You must make sure that they, too, receive or can get the
    source code.  And you must show them these terms so they know their
    rights.
    
      We protect your rights with two steps: (1) copyright the software, and
    (2) offer you this license which gives you legal permission to copy,
    distribute and/or modify the software.
    
      Also, for each author's protection and ours, we want to make certain
    that everyone understands that there is no warranty for this free
    software.  If the software is modified by someone else and passed on, we
    want its recipients to know that what they have is not the original, so
    that any problems introduced by others will not reflect on the original
    authors' reputations.
    
      Finally, any free program is threatened constantly by software
    patents.  We wish to avoid the danger that redistributors of a free
    program will individually obtain patent licenses, in effect making the
    program proprietary.  To prevent this, we have made it clear that any
    patent must be licensed for everyone's free use or not licensed at all.
    
      The precise terms and conditions for copying, distribution and
    modification follow.
    
    		    GNU GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License.  The "Program", below,
    refers to any such program or work, and a "work based on the Program"
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language.  (Hereinafter, translation is included without limitation in
    the term "modification".)  Each licensee is addressed as "you".
    
    Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running the Program is not restricted, and the output from the Program
    is covered only if its contents constitute a work based on the
    Program (independent of having been made by running the Program).
    Whether that is true depends on what the Program does.
    
      1. You may copy and distribute verbatim copies of the Program's
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.
    
    You may charge a fee for the physical act of transferring a copy, and
    you may at your option offer warranty protection in exchange for a fee.
    
      2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) You must cause the modified files to carry prominent notices
        stating that you changed the files and the date of any change.
    
        b) You must cause any work that you distribute or publish, that in
        whole or in part contains or is derived from the Program or any
        part thereof, to be licensed as a whole at no charge to all third
        parties under the terms of this License.
    
        c) If the modified program normally reads commands interactively
        when run, you must cause it, when started running for such
        interactive use in the most ordinary way, to print or display an
        announcement including an appropriate copyright notice and a
        notice that there is no warranty (or else, saying that you provide
        a warranty) and that users may redistribute the program under
        these conditions, and telling the user how to view a copy of this
        License.  (Exception: if the Program itself is interactive but
        does not normally print such an announcement, your work based on
        the Program is not required to print an announcement.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Program,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Program, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Program.
    
    In addition, mere aggregation of another work not based on the Program
    with the Program (or with a work based on the Program) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
    
        a) Accompany it with the complete corresponding machine-readable
        source code, which must be distributed under the terms of Sections
        1 and 2 above on a medium customarily used for software interchange; or,
    
        b) Accompany it with a written offer, valid for at least three
        years, to give any third party, for a charge no more than your
        cost of physically performing source distribution, a complete
        machine-readable copy of the corresponding source code, to be
        distributed under the terms of Sections 1 and 2 above on a medium
        customarily used for software interchange; or,
    
        c) Accompany it with the information you received as to the offer
        to distribute corresponding source code.  (This alternative is
        allowed only for noncommercial distribution and only if you
        received the program in object code or executable form with such
        an offer, in accord with Subsection b above.)
    
    The source code for a work means the preferred form of the work for
    making modifications to it.  For an executable work, complete source
    code means all the source code for all modules it contains, plus any
    associated interface definition files, plus the scripts used to
    control compilation and installation of the executable.  However, as a
    special exception, the source code distributed need not include
    anything that is normally distributed (in either source or binary
    form) with the major components (compiler, kernel, and so on) of the
    operating system on which the executable runs, unless that component
    itself accompanies the executable.
    
    If distribution of executable or object code is made by offering
    access to copy from a designated place, then offering equivalent
    access to copy the source code from the same place counts as
    distribution of the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License.  Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.
    
      5. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Program or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.
    
      6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.
    
      7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.
    
    If any portion of this section is held invalid or unenforceable under
    any particular circumstance, the balance of the section is intended to
    apply and the section as a whole is intended to apply in other
    circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system, which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded.  In such case, this License incorporates
    the limitation as if written in the body of this License.
    
      9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time.  Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Program
    specifies a version number of this License which applies to it and "any
    later version", you have the option of following the terms and conditions
    either of that version or of any later version published by the Free
    Software Foundation.  If the Program does not specify a version number of
    this License, you may choose any version ever published by the Free Software
    Foundation.
    
      10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission.  For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this.  Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.
    
    			    NO WARRANTY
    
      11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
    FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
    OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
    PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
    OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
    TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
    PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
    REPAIR OR CORRECTION.
    
      12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
    WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
    REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
    INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
    OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
    TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
    YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
    PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGES.
    
    		     END OF TERMS AND CONDITIONS
    
    ====================================================================
    
          

    Setting up a Pool of One

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of triggering these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.
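
    Although the cron jobs themselves come later in the procedure, it may help to see where you are headed. A pool-of-one schedule often ends up looking something like this sketch; the times, the /etc/cron.d location, and the cback path are illustrative, so adjust them for your system:

```shell
# Example /etc/cron.d entries for a pool of one -- times and paths
# are illustrative only; adjust them for your own system.
30 00 * * * root  /usr/bin/cback collect
00 02 * * * root  /usr/bin/cback stage
00 04 * * * root  /usr/bin/cback store
00 06 * * * root  /usr/bin/cback purge
```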

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.
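
    For example, on systems that deliver mail using a sendmail-style /etc/aliases file, root's mail can be forwarded by adding an alias like the following (the user name is only an example) and then running newaliases:

```
root: someuser
```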

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (e.g. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
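
    The layout above can be created with a few commands. This is a sketch only: the path defaults to a throwaway directory under /tmp so it is safe to run as-is, but on a real system you would set BACKUP_ROOT=/opt/backup, run the commands as root, and chown the tree to your backup user.

```shell
# Create the recommended collect/stage/tmp layout with permissions 700.
# BACKUP_ROOT defaults to a demo path; use /opt/backup (or similar) for real.
BACKUP_ROOT="${BACKUP_ROOT:-/tmp/cback-tree-demo}"
mkdir -p "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"
chmod 700 "$BACKUP_ROOT" "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"
# chown -R backup:backup "$BACKUP_ROOT"   # uncomment on a real system (as root)
```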

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above) create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).
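
    For instance, if you kept your configuration in /usr/local/etc/cback.conf (an example path), a crontab entry like the one shown in Step 9 would need the --config option:

```
30 00 * * * root  cback --config /usr/local/etc/cback.conf all
```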

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.
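
    Since unclosed tags are the usual culprit, you can also sanity-check the file for XML well-formedness before running cback validate. This sketch assumes python3 is available (Cedar Backup itself requires Python); the file path and the configuration content are illustrative only, not a complete Cedar Backup configuration.

```shell
# Write a tiny illustrative XML file and check that it parses.
# On a real system you would point CONF at /etc/cback.conf instead.
CONF="${CONF:-/tmp/cback-demo.conf}"
cat > "$CONF" <<'EOF'
<cb_config>
  <options>
    <working_dir>/opt/backup/tmp</working_dir>
  </options>
</cb_config>
EOF
# ET.parse() exits non-zero on malformed XML such as an unclosed tag.
if python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$CONF" 2>/dev/null; then
    echo "XML is well-formed"
else
    echo "XML has syntax errors (check for unclosed tags)"
fi
```

    Note that this only catches XML syntax errors; cback validate additionally checks the configuration's content.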

    Step 8: Test your backup.

    Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [25] To be safe, always enable the consistency check option in the store configuration section.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

    30 00 * * * root  cback all
             

    Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

    #!/bin/sh
    cback all
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.
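
    For example, the /etc/crontab line shown above would become:

```
30 00 * * * root  cback -O all
```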

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.



    [25] See SF Bug Tracking at http://cedar-backup.sourceforge.net/.

Cedar Backup Software Manual

Kenneth J. Pronovici

Copyright 2005-2008 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms of
the GNU General Public License (the "GPL"), Version 2, as published by the Free
Software Foundation. For the purposes of the GPL, the "preferred form of
modification" for this work is the original Docbook XML text files. If you
choose to distribute this work in a compiled form (i.e. if you distribute HTML,
PDF or Postscript documents based on the original Docbook XML text files), you
must also consider image files to be "source code" if those images are required
in order to construct a complete and readable compiled version of the work.

This work is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. Copies of the GNU General Public License are available from
the Free Software Foundation website, http://www.gnu.org/. You may also write
the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
02111-1307 USA.

-------------------------------------------------------------------------------

Table of Contents

Preface
    Purpose; Audience; Conventions Used in This Book (Typographic
    Conventions; Icons); Organization of This Manual; Acknowledgments
1. Introduction
    What is Cedar Backup?; How to Get Support; History
2. Basic Concepts
    General Architecture; Data Recovery; Cedar Backup Pools; The Backup
    Process (The Collect Action; The Stage Action; The Store Action; The
    Purge Action; The All Action; The Validate Action; The Initialize
    Action; The Rebuild Action); Coordination between Master and Clients;
    Managed Backups; Media and Device Types; Incremental Backups; Extensions
3. Installation
    Background; Installing on a Debian System; Installing from Source
    (Installing Dependencies; Installing the Source Package)
4. Command Line Tools
    Overview; The cback command (Introduction; Syntax; Switches; Actions);
    The cback-span command (Introduction; Syntax; Switches; Using
    cback-span; Sample run)
5. Configuration
    Overview; Configuration File Format; Sample Configuration File;
    Reference Configuration; Options Configuration; Peers Configuration;
    Collect Configuration; Stage Configuration; Store Configuration; Purge
    Configuration; Extensions Configuration;
    Setting up a Pool of One (Step 1: Decide when you will run your backup;
    Step 2: Make sure email works; Step 3: Configure your writer device;
    Step 4: Configure your backup user; Step 5: Create your backup tree;
    Step 6: Create the Cedar Backup configuration file; Step 7: Validate
    the Cedar Backup configuration file; Step 8: Test your backup; Step 9:
    Modify the backup cron jobs);
    Setting up a Client Peer Node (Step 1: Decide when you will run your
    backup; Step 2: Make sure email works; Step 3: Configure the master in
    your backup pool; Step 4: Configure your backup user; Step 5: Create
    your backup tree; Step 6: Create the Cedar Backup configuration file;
    Step 7: Validate the Cedar Backup configuration file; Step 8: Test your
    backup; Step 9: Modify the backup cron jobs);
    Setting up a Master Peer Node (Step 1: Decide when you will run your
    backup; Step 2: Make sure email works; Step 3: Configure your writer
    device; Step 4: Configure your backup user; Step 5: Create your backup
    tree; Step 6: Create the Cedar Backup configuration file; Step 7:
    Validate the Cedar Backup configuration file; Step 8: Test connectivity
    to client machines; Step 9: Test your backup; Step 10: Modify the
    backup cron jobs);
    Configuring your Writer Device (Device Types; Devices identified by
    device name; Devices identified by SCSI id; Linux Notes; Finding your
    Linux CD Writer; Mac OS X Notes; Optimized Blanking Strategy)
6. Official Extensions
    System Information Extension; Subversion Extension; MySQL Extension;
    PostgreSQL Extension; Mbox Extension; Encrypt Extension; Split
    Extension; Capacity Extension
A. Extension Architecture Interface
B. Dependencies
C. Data Recovery
    Finding your Data; Recovering Filesystem Data (Full Restore; Partial
    Restore); Recovering MySQL Data; Recovering Subversion Data; Recovering
    Mailbox Data; Recovering Data split by the Split Extension
D. Securing Password-less SSH Connections
E. Copyright

Preface

Table of Contents

Purpose
Audience
Conventions Used in This Book
    Typographic Conventions
    Icons
Organization of This Manual
Acknowledgments

Purpose

This software manual has been written to document the 2.0 series of Cedar
Backup, originally released in early 2005.

Audience

This manual has been written for computer-literate administrators who need to
use and configure Cedar Backup on their Linux or UNIX-like system. The examples
in this manual assume the reader is relatively comfortable with UNIX and
command-line interfaces.

Conventions Used in This Book

This section covers the various conventions used in this manual.

Typographic Conventions

Term          Used for first use of important terms.
Command       Used for commands, command output, and switches.
Replaceable   Used for replaceable items in code and text.
Filenames     Used for file and directory names.

Icons

Note     This icon designates a note relating to the surrounding text.
Tip      This icon designates a helpful tip relating to the surrounding text.
Warning  This icon designates a warning relating to the surrounding text.

Organization of This Manual

Chapter 1, Introduction
    Provides some background about how Cedar Backup came to be, its history,
    some general information about what needs it is intended to meet, etc.
Chapter 2, Basic Concepts
    Discusses the basic concepts of a Cedar Backup infrastructure, and
    specifies terms used throughout the rest of the manual.
Chapter 3, Installation
    Explains how to install the Cedar Backup package either from the Python
    source distribution or from the Debian package.
Chapter 4, Command Line Tools
    Discusses the various Cedar Backup command-line tools, including the
    primary cback command.
Chapter 5, Configuration
    Provides detailed information about how to configure Cedar Backup.
Chapter 6, Official Extensions
    Describes each of the officially-supported Cedar Backup extensions.
Appendix A, Extension Architecture Interface
    Specifies the Cedar Backup extension architecture interface, through
    which third party developers can write extensions to Cedar Backup.
Appendix B, Dependencies
    Provides some additional information about the packages which Cedar
    Backup relies on, including information about how to find documentation
    and packages on non-Debian systems.
Appendix C, Data Recovery
    Cedar Backup provides no facility for restoring backups, assuming the
    administrator can handle this infrequent task. This appendix provides
    some notes for administrators to work from.
Appendix D, Securing Password-less SSH Connections
    Password-less SSH connections are a necessary evil when remote backup
    processes need to execute without human interaction. This appendix
    describes some ways that you can reduce the risk to your backup pool
    should your master machine be compromised.

Acknowledgments

The structure of this manual and some of the basic boilerplate has been taken
from the book Version Control with Subversion. Many thanks to the authors (and
O'Reilly) for making this excellent reference available under a free and open
license.

There are not very many Cedar Backup users today, but almost all of them have
contributed in some way to the documentation in this manual, either by asking
questions, making suggestions or finding bugs. I'm glad to have them as users,
and I hope that this new release meets their needs even better than the
previous release.

My wife Julie puts up with a lot.
It's sometimes not easy to live with someone who hacks on open source code in
his free time, even when you're a pretty good engineer yourself, like she is.
First, she managed to live with a dual-boot Debian and Windoze machine; then
she managed to get used to IceWM rather than a prettier desktop; and eventually
she even managed to cope with vim when she needed to. Now, even after all that,
she has graciously volunteered to edit this manual. I much appreciate her skill
with a red pen.

Chapter 1. Introduction

Table of Contents

What is Cedar Backup?
How to Get Support
History

"Only wimps use tape backup: real men just upload their important stuff on
ftp, and let the rest of the world mirror it." -- Linus Torvalds, at the
release of Linux 2.0.8 in July of 1996.

What is Cedar Backup?

Cedar Backup is a software package designed to manage system backups for a
pool of local and remote machines. Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories. It can also be easily extended to support other kinds of data
sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc, with
the expectation that the disc will be changed or overwritten at the beginning
of each week. If your hardware is new enough (and almost all hardware is
today), Cedar Backup can write multisession discs, allowing you to add
incremental data to a disc on a daily basis.

Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python programming language.

There are many different backup software implementations out there in the free
software and open source world. Cedar Backup aims to fill a niche: it aims to
be a good fit for people who need to back up a limited amount of important
data to CD or DVD on a regular basis.
Cedar Backup isn't for you if you want to back up your MP3 collection every
night, or if you want to back up a few hundred machines. However, if you
administer a small set of machines and you want to run daily incremental
backups for things like system configuration, current email, small web sites,
a CVS or Subversion repository, or a small MySQL database, then Cedar Backup
is probably worth your time.

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily
supported on Debian and other Linux systems. However, since it is written in
portable Python, it should run without problems on just about any UNIX-like
operating system. In particular, full Cedar Backup functionality is known to
work on Debian and SuSE Linux systems, and client functionality is also known
to work on FreeBSD and Mac OS X systems.

To run a Cedar Backup client, you really just need a working Python
installation. To run a Cedar Backup master, you will also need a set of other
executables, most of which are related to building and writing CD/DVD images.
A full list of dependencies is provided in the section called "Installing
Dependencies".

How to Get Support

Cedar Backup is open source software that is provided to you at no cost. It is
provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. However, that said, someone can usually help you solve
whatever problems you might see.

If you experience a problem, your best bet is to write the Cedar Backup Users
mailing list. ^[1] This is a public list for all Cedar Backup users. If you
write to this list, you might get help from me, or from some other user who
has experienced the same thing you have.

If you know that the problem you have found constitutes a bug, or if you would
like to make an enhancement request, then feel free to file a bug report in
the Cedar Solutions Bug Tracking System.
^[2] If you are not comfortable discussing your problem in public or listing
it in a public database, or if you need to send along information that you do
not want made public, then you can write the support address. That mail will
go directly to me or to someone else who can help you. If you write the
support address about a bug, a "scrubbed" bug report will eventually end up in
the public bug database anyway, so if at all possible you should use the
public reporting mechanisms. One of the strengths of the open-source software
development model is its transparency.

Regardless of how you report your problem, please try to provide as much
information as possible about the behavior you observed and the environment in
which the problem behavior occurred. ^[3] In particular, you should provide:
the version of Cedar Backup that you are using; how you installed Cedar Backup
(i.e. Debian package, source package, etc.); the exact command line that you
executed; any error messages you received, including Python stack traces (if
any); and relevant sections of the Cedar Backup log. It would be even better
if you could describe exactly how to reproduce the problem, for instance by
including your entire configuration file and/or specific information about
your system that might relate to the problem. However, please do not provide
huge sections of debugging logs unless you are sure they are relevant or
unless someone asks for them.

Tip

Sometimes, the error that Cedar Backup displays can be rather cryptic. This is
because under internal error conditions, the text related to an exception
might get propagated all of the way up to the user interface. If the message
you receive doesn't make much sense, or if you suspect that it results from an
internal error, you might want to re-run Cedar Backup with the --stack option.
This forces Cedar Backup to dump the entire Python stack trace associated with
the error, rather than just printing the last message it received.
This is good information to include along with a bug report, as well.

History

Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup.
These scripts met an immediate need (which was to back up skyjammer.com and
some personal machines) but proved to be unstable, overly verbose and rather
difficult to maintain.

In early 2002, work began on a rewrite of kbackup. The goal was to address
many of the shortcomings of the original application, as well as to clean up
the code and make it available to the general public. While doing research
related to code I could borrow or base the rewrite on, I discovered that there
was already an existing backup package with the name kbackup, so I decided to
change the name to Cedar Backup instead.

Because I had become fed up with the prospect of maintaining a large volume of
Perl code, I decided to abandon that language in favor of Python. ^[4] At the
time, I chose Python mostly because I was interested in learning it, but in
retrospect it turned out to be a very good decision. From my perspective,
Python has almost all of the strengths of Perl, but few of its inherent
weaknesses (I feel that primarily, Python code often ends up being much more
readable than Perl code).

Around this same time, skyjammer.com and cedar-solutions.com were converted to
run Debian GNU/Linux (potato) ^[5] and I entered the Debian new maintainer
queue, so I also made it a goal to implement Debian packages along with a
Python source distribution for the new release.

Version 1.0 of Cedar Backup was released in June of 2002. We immediately began
using it to back up skyjammer.com and cedar-solutions.com, where it proved to
be much more stable than the original code. Since then, we have continued to
use Cedar Backup for those sites, and Cedar Backup has picked up a handful of
other users who have occasionally reported bugs or requested minor
enhancements.
In the meantime, I continued to improve as a Python programmer and also
started doing a significant amount of professional development in Java. It
soon became obvious that the internal structure of Cedar Backup 1.0, while
much better than kbackup, still left something to be desired. In November
2003, I began an attempt at cleaning up the codebase. I converted all of the
internal documentation to use Epydoc, ^[6] and updated the code to use the
newly-released Python logging package ^[7] after having a good experience with
Java's log4j. However, I was still not satisfied with the code, which did not
lend itself to the automated regression testing I had used when working with
JUnit in my Java code.

So, rather than releasing the cleaned-up code, I instead began another
ground-up rewrite in May 2004. With this rewrite, I applied everything I had
learned from other Java and Python projects I had undertaken over the last few
years. I structured the code to take advantage of Python's unique ability to
blend procedural code with object-oriented code, and I made automated unit
testing a primary requirement. The result is the 2.0 release, which is
cleaner, more compact, better focused, and better documented than any release
before it. Utility code is less application-specific, and is now usable as a
general-purpose library. The 2.0 release also includes a complete regression
test suite of over 3000 tests, which will help to ensure that quality is
maintained as development continues into the future. ^[8]

--------------

^[1] See "SF Mailing Lists" at http://cedar-backup.sourceforge.net/.
^[2] See "SF Bug Tracking" at http://cedar-backup.sourceforge.net/.
^[3] See Simon Tatham's excellent bug reporting tutorial:
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html.
^[4] See http://www.python.org/.
^[5] Debian's stable releases are named after characters in the Toy Story
movie.
^[6] Epydoc is a Python code documentation tool. See
http://epydoc.sourceforge.net/.
^[7] See http://docs.python.org/lib/module-logging.html.
^[8] Tests are implemented using Python's unit test framework. See
http://docs.python.org/lib/module-unittest.html.

Chapter 2. Basic Concepts

Table of Contents

General Architecture
Data Recovery
Cedar Backup Pools
The Backup Process
    The Collect Action
    The Stage Action
    The Store Action
    The Purge Action
    The All Action
    The Validate Action
    The Initialize Action
    The Rebuild Action
Coordination between Master and Clients
Managed Backups
Media and Device Types
Incremental Backups
Extensions

General Architecture

Cedar Backup is architected as a Python package (library) and a single
executable (a Python script). The Python package provides both
application-specific code and general utilities that can be used by programs
other than Cedar Backup. It also includes modules that can be used by third
parties to extend Cedar Backup or provide related functionality.

The cback script is designed to run as root, since otherwise it's difficult to
back up system directories or write to the CD/DVD device. However, pains are
taken to use the backup user's effective user id (specified in configuration)
when appropriate. Note: this does not mean that cback runs setuid ^[9] or
setgid. However, all files on disk will be owned by the backup user, and all
rsh-based network connections will take place as the backup user.

The cback script is configured via command-line options and an XML
configuration file on disk. The configuration file is normally stored in
/etc/cback.conf, but this path can be overridden at runtime. See Chapter 5,
Configuration for more information on how Cedar Backup is configured.

Warning

You should be aware that backups to CD/DVD media can probably be read by any
user which has permissions to mount the CD/DVD writer. If you intend to leave
the backup disc in the drive at all times, you may want to consider this when
setting up device permissions on your machine. See also the section called
"Encrypt Extension".
Data Recovery

Cedar Backup does not include any facility to restore backups. Instead, it
assumes that the administrator (using the procedures and references in
Appendix C, Data Recovery) can handle the task of restoring their own system,
using the standard system tools at hand.

If I were to maintain recovery code in Cedar Backup, I would almost certainly
end up in one of two situations. Either Cedar Backup would only support simple
recovery tasks, and those via an interface a lot like that of the underlying
system tools; or Cedar Backup would have to include a hugely complicated
interface to support more specialized (and hence useful) recovery tasks like
restoring individual files as of a certain point in time. In either case, I
would end up trying to maintain critical functionality that would be rarely
used, and hence would also be rarely tested by end-users. I am uncomfortable
asking anyone to rely on functionality that falls into this category.

My primary goal is to keep the Cedar Backup codebase as simple and focused as
possible. I hope you can understand how the choice of providing documentation,
but not code, seems to strike the best balance between managing code
complexity and providing the functionality that end-users need.

Cedar Backup Pools

There are two kinds of machines in a Cedar Backup pool. One machine (the
master) has a CD or DVD writer on it and writes the backup to disc. The others
(clients) collect data to be written to disc by the master. Collectively, the
master and client machines in a pool are called peer machines.

Cedar Backup has been designed primarily for situations where there is a
single master and a set of other clients that the master interacts with.
However, it will just as easily work for a single machine (a backup pool of
one) and in fact more users seem to use it like this than any other way.
The Backup Process

The Cedar Backup backup process is structured in terms of a set of decoupled
actions which execute independently (based on a schedule in cron) rather than
through some highly coordinated flow of control. This design decision has both
positive and negative consequences. On the one hand, the code is much simpler
and can choose to simply abort or log an error if its expectations are not
met. On the other hand, the administrator must coordinate the various actions
during initial set-up. See the section called "Coordination between Master and
Clients" (later in this chapter) for more information on this subject.

A standard backup run consists of four steps (actions), some of which execute
on the master machine, and some of which execute on one or more client
machines. These actions are: collect, stage, store and purge. In general, more
than one action may be specified on the command-line. If more than one action
is specified, then actions will be taken in a sensible order (generally
collect, stage, store, purge). A special all action is also allowed, which
implies all of the standard actions in the same sensible order.

The cback command also supports several actions that are not part of the
standard backup run and cannot be executed along with any other actions. These
actions are validate, initialize and rebuild. All of the various actions are
discussed further below. See Chapter 5, Configuration for more information on
how a backup run is configured.

Flexibility

Cedar Backup was designed to be flexible. It allows you to decide for yourself
which backup steps you care about executing (and when you execute them), based
on your own situation and your own priorities.

As an example, I always back up every machine I own. I typically keep 7-10
days of staging directories around, but switch CD/DVD media mostly every week.
That way, I can periodically take a disc off-site in case the machine gets
stolen or damaged.
If you're not worried about these risks, then there's no need to write to
disc. In fact, some users prefer to use their master machine as a simple
"consolidation point". They don't back up any data on the master, and don't
write to disc at all. They just use Cedar Backup to handle the mechanics of
moving backed-up data to a central location. This isn't quite what Cedar
Backup was written to do, but it is flexible enough to meet their needs.

The Collect Action

The collect action is the first action in a standard backup run. It executes
on both master and client nodes. Based on configuration, this action traverses
the peer's filesystem and gathers files to be backed up. Each configured
high-level directory is collected up into its own tar file in the collect
directory. The tarfiles can either be uncompressed (.tar) or compressed with
either gzip (.tar.gz) or bzip2 (.tar.bz2).

There are three supported collect modes: daily, weekly and incremental.
Directories configured for daily backups are backed up every day. Directories
configured for weekly backups are backed up on the first day of the week.
Directories configured for incremental backups are traversed every day, but
only the files which have changed (based on a saved-off SHA hash) are actually
backed up.

Collect configuration also allows for a variety of ways to filter files and
directories out of the backup. For instance, administrators can configure an
ignore indicator file ^[10] or specify absolute paths or filename patterns
^[11] to be excluded. You can even configure a backup "link farm" rather than
explicitly listing files and directories in configuration.

This action is optional on the master. You only need to configure and execute
the collect action on the master if you have data to back up on that machine.
If you plan to use the master only as a "consolidation point" to collect data
from other machines, then there is no need to execute the collect action
there.
If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

The Stage Action

The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory, organized by peer name.

For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem), while remote peer collect directories are accessed via an RSH-compatible command such as ssh.

If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break the backup for other peers which are up and running.

Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

Note: Directories "collected" by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

The Store Action

The store action is the third action in a standard backup run. It executes on the master peer node.
The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful.

If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs.

This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, that is fine.

Warning: The store action is not supported on the Mac OS X (darwin) platform. On that platform, the "automount" function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system over who owns the media and the device makes it nearly impossible to execute the store action successfully.

Current Staging Directory

The store action tries to be smart about finding the current staging directory. It first checks the current day's staging directory. If that directory exists, and it has not yet been written to disc (i.e. there is no store indicator), then it will be used. Otherwise, the store action will look for an unused staging directory for either the previous day or the next day, in that order. A warning will be written to the log under these circumstances (controlled by the configuration value).

This behavior varies slightly when the --full option is in effect. Under these circumstances, any existing store indicator will be ignored.
Also, the store action will always attempt to use the current day's staging directory, ignoring any staging directories for the previous day or the next day. This way, running a full store action more than once concurrently will always produce the same results. (You might imagine a use case where a person wants to make several copies of the same full backup.)

The Purge Action

The purge action is the fourth and final action in a standard backup run. It executes on both the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged.

Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.

The All Action

The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line.

Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. ^[12]

The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

The Validate Action

The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line.
The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

The Initialize Action

The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device. However, if the "check media" store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized.

Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with "CEDAR BACKUP").

Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).

The Rebuild Action

The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line.

The rebuild action attempts to rebuild "this week's" disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason.

To decide what data to write to disc again, the rebuild action finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session.
The rebuild action does not have its own configuration. It relies on configuration for the other actions, especially the store action.

Coordination between Master and Clients

Unless you are using Cedar Backup to manage a "pool of one", you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult; it mostly consists of making sure that operations happen in the right order. Still, some users are surprised that it is required and want to know why Cedar Backup can't just "take care of it for me".

Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.

Managed Backups

Cedar Backup also supports an optional feature called the "managed backup". This feature is intended for use with remote clients where cron is not available (for instance, SourceForge shell accounts).

When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell.

To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients.

Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.
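When cron is used on each node (rather than managed backups), the ordering described under "Coordination between Master and Clients" might look like the following system crontab entries. All of the times here are illustrative assumptions; the point is only the relative order: clients collect first, the master stages and stores next, and purges run last, leaving the master enough time to stage before clients purge.

```
# Hypothetical client /etc/crontab entries: collect well before the
# master stages, purge well after it has finished.
30 00 * * * root  cback collect
30 06 * * * root  cback purge

# Hypothetical master /etc/crontab entries: stage after all clients
# have collected, store after staging, purge last.
00 02 * * * root  cback stage
30 02 * * * root  cback store
30 06 * * * root  cback purge
```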
However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

Media and Device Types

Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. ^[13]

When using a new enough backup device, a new "multisession" ISO image ^[14] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images (which is really unusual today), then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the "daily" backup mode to avoid losing data).

Cedar Backup currently supports four different kinds of CD media:

cdr-74    74-minute non-rewritable CD media
cdrw-74   74-minute rewritable CD media
cdr-80    80-minute non-rewritable CD media
cdrw-80   80-minute rewritable CD media

I have chosen to support just these four types of CD media because they seem to be the most "standard" of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.
Cedar Backup also supports two kinds of DVD media:

dvd+r     Single-layer non-rewritable DVD+R media
dvd+rw    Single-layer rewritable DVD+RW media

The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

Incremental Backups

Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis.

In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value ^[15] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.

Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used.
Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.

Extensions

Imagine that there is a third-party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of "collect" step.

Prior to Cedar Backup 2.0, any such integration would have been completely independent of Cedar Backup itself. The "external" backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

Starting with version 2.0, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge) but can be executed by Cedar Backup when properly configured.

Extension authors implement an "action process" function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action.

Hopefully, as the Cedar Backup 2.0 user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase.

Note: Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions. Developers may be interested in Appendix A, Extension Architecture Interface.
--------------

^[9] See http://en.wikipedia.org/wiki/Setuid

^[10] Analogous to .cvsignore in CVS

^[11] In terms of Python regular expressions

^[12] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.

^[13] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.

^[14] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a "filesystem-within-a-file", and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.

^[15] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.

Chapter 3. Installation

Table of Contents

Background
Installing on a Debian System
Installing from Source
    Installing Dependencies
    Installing the Source Package

Background

There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.

If you are running a Linux distribution other than Debian, or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

Non-Linux Platforms

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python, it should run without problems on just about any UNIX-like operating system.
In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

To run a Cedar Backup client, you really just need a working Python installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided further on in this chapter.

If you would like to use Cedar Backup on a non-Linux system, you should install the Python source distribution along with all of the indicated dependencies. Then, please report back to the Cedar Backup Users mailing list ^[16] with information about your platform and any problems you encountered.

Installing on a Debian System

The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude.

If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian "etch" release is the first release to contain Cedar Backup.) Otherwise, you need to install from the Cedar Solutions APT data source. To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file. ^[17]

After you have configured the proper APT data source, install Cedar Backup using this set of commands:

$ apt-get update
$ apt-get install cedar-backup2 cedar-backup2-doc

Several of the Cedar Backup dependencies are listed as "recommended" rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.
If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. ^[18]

In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

Note: The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

Installing from Source

On platforms other than Debian, Cedar Backup is installed from a Python source distribution. ^[19] You will have to manage dependencies on your own.

Tip: Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to "upstream" source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

Installing Dependencies

Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met.

Cedar Backup is written in Python and requires version 2.5 or greater of the language. Python 2.5 was released on 19 Sep 2006, so by now most current Linux and BSD distributions should include it.

You must install Python on every peer node in a pool (master or client). Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.
Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

* mkisofs
* eject
* mount
* umount
* volname

Then, you need this utility if you are writing CD media:

* cdrecord

or this utility if you are writing DVD media:

* growisofs

All of these utilities are common and are easy to find for almost any UNIX-like operating system.

Installing the Source Package

Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py.

Once you have downloaded the source package from the Cedar Solutions website, ^[18] untar it:

$ zcat CedarBackup2-2.0.0.tar.gz | tar xvf -

This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory will always match the version number in the filename.

If you have root access and want to install the package to the "standard" Python location on your system, then you can install the package in two simple steps:

$ cd CedarBackup2-2.0.0
$ python setup.py install

Make sure that you are using Python 2.5 or better to execute setup.py.

You may also wish to run the unit tests before actually installing anything. Run them like so:

$ python util/test.py

If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. ^[20] This is particularly important for non-Linux platforms where I do not have a test system available to me.

Some users might want to choose a different install location or change other install parameters.
To get more information about how setup.py works, use the --help option:

$ python setup.py --help
$ python setup.py install --help

In any case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

--------------

^[16] See "SF Mailing Lists" at http://cedar-backup.sourceforge.net/.

^[17] See "SF Bug Tracking" at http://cedar-backup.sourceforge.net/.

^[18] See http://cedar-solutions.com/debian.html.

^[19] See http://docs.python.org/lib/module-distutils.html.

^[20]

Chapter 4. Command Line Tools

Table of Contents

Overview
The cback command
    Introduction
    Syntax
    Switches
    Actions
The cback-span command
    Introduction
    Syntax
    Switches
    Using cback-span
    Sample run

Overview

Cedar Backup comes with two command-line programs, the cback and cback-span commands. The cback command is the primary command-line interface and the only Cedar Backup program that most users will ever need.

Users that have a lot of data to back up (more than will fit on a single CD or DVD) can use the interactive cback-span tool to split their data between multiple discs.

The cback command

Introduction

Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process.
Syntax

The cback command has the following syntax:

Usage: cback [switches] action(s)

The following switches are accepted:

   -h, --help          Display this usage/help listing
   -V, --version       Display version information
   -b, --verbose       Print verbose output as well as logging to disk
   -q, --quiet         Run quietly (display no output to the screen)
   -c, --config        Path to config file (default: /etc/cback.conf)
   -f, --full          Perform a full backup, regardless of configuration
   -M, --managed       Include managed clients when executing actions
   -N, --managed-only  Include ONLY managed clients when executing actions
   -l, --logfile       Path to logfile (default: /var/log/cback.log)
   -o, --owner         Logfile ownership, user:group (default: root:adm)
   -m, --mode          Octal logfile permissions mode (default: 640)
   -O, --output        Record some sub-command (i.e. cdrecord) output to the log
   -d, --debug         Write debugging information to the log (implies --output)
   -s, --stack         Dump a Python stack trace instead of swallowing exceptions
   -D, --diagnostics   Print runtime diagnostics to the screen and exit

The following actions may be specified:

   all                 Take all normal actions (collect, stage, store, purge)
   collect             Take the collect action
   stage               Take the stage action
   store               Take the store action
   purge               Take the purge action
   rebuild             Rebuild "this week's" disc if possible
   validate            Validate configuration only
   initialize          Initialize media for use with Cedar Backup

You may also specify extended actions that have been defined in configuration.

You must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions and/or extended actions may be specified in any arbitrary order; they will be executed in a sensible order. The "all", "rebuild", "validate", and "initialize" actions may not be combined with other actions.

Note that the all action only executes the standard four actions. It never executes any of the configured extensions. ^[21]

Switches

-h, --help

   Display usage/help listing.
-V, --version

   Display version information.

-b, --verbose

   Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet

   Run quietly (display no output to the screen).

-c, --config

   Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

-f, --full

   Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

-M, --managed

   Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

-N, --managed-only

   Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client, but do not execute the action locally.

-l, --logfile

   Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner

   Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode

   Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

-O, --output

   Record some sub-command output to the logfile.
When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

-d, --debug

   Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack

   Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics

   Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

Actions

You can find more information about the various actions in the section called "The Backup Process" (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.

The cback-span command

Introduction

Cedar Backup was designed around, and is still primarily focused on, weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data.
However, some users have expressed a need to write these large kinds of backups to disc, if not every day, then at least occasionally. The cback-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs.

cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be arbitrarily split up so that space is utilized most efficiently.

Syntax

The cback-span command has the following syntax:

Usage: cback-span [switches]

Cedar Backup 'span' tool.

This Cedar Backup utility spans staged data between multiple discs. It is a utility, not an extension, and requires user interaction.
The following switches are accepted, mostly to set up underlying Cedar Backup functionality:

   -h, --help      Display this usage/help listing
   -V, --version   Display version information
   -b, --verbose   Print verbose output as well as logging to disk
   -c, --config    Path to config file (default: /etc/cback.conf)
   -l, --logfile   Path to logfile (default: /var/log/cback.log)
   -o, --owner     Logfile ownership, user:group (default: root:adm)
   -m, --mode      Octal logfile permissions mode (default: 640)
   -O, --output    Record some sub-command (i.e. cdrecord) output to the log
   -d, --debug     Write debugging information to the log (implies --output)
   -s, --stack     Dump a Python stack trace instead of swallowing exceptions

Switches

-h, --help

   Display usage/help listing.

-V, --version

   Display version information.

-b, --verbose

   Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-c, --config

   Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

-l, --logfile

   Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner

   Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode

   Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the command is executed, it will retain its existing ownership and mode.

-O, --output

   Record some sub-command output to the logfile.
When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media. -d, --debug Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well. -s, --stack Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. Using cback-span As discussed above, cback-span is an interactive command. It cannot be run from cron. You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage. The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data; usable capacity is usually more like 627 MB. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly. The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm. The four available fit algorithms are: worst The worst-fit algorithm. The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly.
If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing. best The best-fit algorithm. The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms. first The first-fit algorithm. The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting. alternate A hybrid algorithm that I call alternate-fit. This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. 
The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items. Sample run Below is a log showing a sample cback-span run.

================================================
           Cedar Backup 'span' tool
================================================

This is the Cedar Backup span tool.  It is used to split up staging
data when that staging data does not fit onto a single disc.

This utility operates using Cedar Backup configuration.  Configuration
specifies which staging directory to look at and which writer device
and media type to use.

Continue? [Y/n]:
===

Cedar Backup store configuration looks like this:

   Source Directory...: /tmp/staging
   Media Type.........: cdrw-74
   Device Type........: cdwriter
   Device Path........: /dev/cdrom
   Device SCSI ID.....: None
   Drive Speed........: None
   Check Data Flag....: True
   No Eject Flag......: False

Is this OK? [Y/n]:
===

Please wait, indexing the source directory (this may take a while)...
===

The following daily staging directories have not yet been written to disc:

   /tmp/staging/2007/02/07
   /tmp/staging/2007/02/08
   /tmp/staging/2007/02/09
   /tmp/staging/2007/02/10
   /tmp/staging/2007/02/11
   /tmp/staging/2007/02/12
   /tmp/staging/2007/02/13
   /tmp/staging/2007/02/14

The total size of the data in these directories is 1.00 GB.

Continue? [Y/n]:
===

Based on configuration, the capacity of your media is 650.00 MB.

Since estimates are not perfect and there is some uncertainty in media
capacity calculations, it is good to have a "cushion", a percentage of
capacity to set aside.  The cushion reduces the capacity of your media,
so a 1.5% cushion leaves 98.5% remaining.

What cushion percentage? [4.00]:
===

The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
It will take at least 2 disc(s) to store your 1.00 GB of data.

Continue? [Y/n]:
===

Which algorithm do you want to use to span your data across
multiple discs?

The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a
different one later.

Which algorithm? [worst]:
===

Please wait, generating file lists (this may take a while)...
===

Using the "worst-fit" algorithm, Cedar Backup can split your data
into 2 discs.

Disc 1: 246 files, 615.97 MB, 98.20% utilization
Disc 2: 8 files, 412.96 MB, 65.84% utilization

Accept this solution? [Y/n]: n
===

Which algorithm do you want to use to span your data across
multiple discs?

The following algorithms are available:

   first....: The "first-fit" algorithm
   best.....: The "best-fit" algorithm
   worst....: The "worst-fit" algorithm
   alternate: The "alternate-fit" algorithm

If you don't like the results you will have a chance to try a
different one later.

Which algorithm? [worst]: alternate
===

Please wait, generating file lists (this may take a while)...
===

Using the "alternate-fit" algorithm, Cedar Backup can split your data
into 2 discs.

Disc 1: 73 files, 627.25 MB, 100.00% utilization
Disc 2: 181 files, 401.68 MB, 64.04% utilization

Accept this solution? [Y/n]: y
===

Please place the first disc in your backup device.
Press return when ready.
===

Initializing image...
Writing image to disc...

--------------
^[21] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in "surprising" behavior. Better to be definitive than confusing.
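To make the worst-fit and best-fit descriptions above concrete, here is a minimal greedy sketch. This is an illustration only, not Cedar Backup's actual implementation; the item names and sizes are hypothetical.

```python
# Items are (name, size) tuples; capacity is in the same units as size.

def fit(items, capacity, smallest_first):
    """Greedily add items in sorted order, skipping any that would overflow."""
    ordered = sorted(items, key=lambda item: item[1], reverse=not smallest_first)
    selected, used = [], 0
    for name, size in ordered:
        if used + size <= capacity:   # the item fits; keep it
            selected.append(name)
            used += size
        # otherwise the item is thrown away and the next one is tried
    return selected, used

def worst_fit(items, capacity):
    """Smallest items first: includes the maximum number of items."""
    return fit(items, capacity, smallest_first=True)

def best_fit(items, capacity):
    """Largest items first: includes the minimum number of items."""
    return fit(items, capacity, smallest_first=False)
```

For example, with items of size 400, 300, 200 and 100 against a capacity of 600, best_fit selects the 400 and 200 items while worst_fit selects the 100, 200 and 300 items; both reach 600, but with different item counts, which is exactly the trade-off the algorithm descriptions above discuss.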
Chapter 5. Configuration

Table of Contents

Overview
Configuration File Format
Sample Configuration File
Reference Configuration
Options Configuration
Peers Configuration
Collect Configuration
Stage Configuration
Store Configuration
Purge Configuration
Extensions Configuration
Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
Optimized Blanking Strategy

Overview Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy.
First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation. Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called "The cback command" (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called "Configuration File Format" (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location. After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done. Which Platform? Cedar Backup has been designed for use on all UNIX-like systems. However, since it was developed on a Debian GNU/Linux system, and because I am a Debian developer, the packaging is prettier and the setup is somewhat simpler on a Debian system than on a system where you install from source. The configuration instructions below have been generalized so they should work well regardless of what platform you are running (e.g. RedHat, Gentoo, FreeBSD). If instructions vary for a particular platform, you will find a note related to that platform.
I am always open to adding more platform-specific hints and notes, so write me if you find problems with these instructions. Configuration File Format Cedar Backup is configured through an XML ^[22] configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions. All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. ^[23] The extensions section is always optional and can be omitted unless extensions are in use.

Note: Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files "Ken" and "ken" might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for "ken" will only match the file if it is actually on the filesystem with a lower-case "k" as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the "Mac Mindset".

Sample Configuration File Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes a stripped config file in /etc/cback.conf and a larger sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample. This is a sample configuration file similar to the one provided in the source package:

<?xml version="1.0"?>
<cb_config>
   <reference>
      <author>Kenneth J. Pronovici</author>
      <revision>1.3</revision>
      <description>Sample</description>
   </reference>
   <options>
      <starting_day>tuesday</starting_day>
      <working_dir>/opt/backup/tmp</working_dir>
      <backup_user>backup</backup_user>
      <backup_group>group</backup_group>
      <rcp_command>/usr/bin/scp -B</rcp_command>
   </options>
   <peers>
      <peer>
         <name>debian</name>
         <type>local</type>
         <collect_dir>/opt/backup/collect</collect_dir>
      </peer>
   </peers>
   <collect>
      <collect_dir>/opt/backup/collect</collect_dir>
      <collect_mode>daily</collect_mode>
      <archive_mode>targz</archive_mode>
      <ignore_file>.cbignore</ignore_file>
      <dir>
         <abs_path>/etc</abs_path>
         <collect_mode>incr</collect_mode>
      </dir>
      <file>
         <abs_path>/home/root/.profile</abs_path>
         <collect_mode>weekly</collect_mode>
      </file>
   </collect>
   <stage>
      <staging_dir>/opt/backup/staging</staging_dir>
   </stage>
   <store>
      <source_dir>/opt/backup/staging</source_dir>
      <media_type>cdrw-74</media_type>
      <device_type>cdwriter</device_type>
      <target_device>/dev/cdrw</target_device>
      <target_scsi_id>0,0,0</target_scsi_id>
      <drive_speed>4</drive_speed>
      <check_data>Y</check_data>
      <check_media>Y</check_media>
      <warn_midnite>Y</warn_midnite>
   </store>
   <purge>
      <dir>
         <abs_path>/opt/backup/stage</abs_path>
         <retain_days>7</retain_days>
      </dir>
      <dir>
         <abs_path>/opt/backup/collect</abs_path>
         <retain_days>0</retain_days>
      </dir>
   </purge>
</cb_config>

Documentation below provides more information about each of the individual configuration sections. Reference Configuration The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired. This is an example reference configuration section:

<reference>
   <author>Kenneth J. Pronovici</author>
   <revision>Revision 1.3</revision>
   <description>Sample</description>
   <generator>Yet to be Written Config Tool (tm)</generator>
</reference>

The following elements are part of the reference configuration section: author Author of the configuration file. Restrictions: None revision Revision of the configuration file. Restrictions: None description Description of the configuration file. Restrictions: None generator Tool that generated the configuration file, if any. Restrictions: None Options Configuration The options configuration section contains configuration options that are not specific to any one action. This is an example options configuration section:

<options>
   <starting_day>tuesday</starting_day>
   <working_dir>/opt/backup/tmp</working_dir>
   <backup_user>backup</backup_user>
   <backup_group>backup</backup_group>
   <rcp_command>/usr/bin/scp -B</rcp_command>
   <rsh_command>/usr/bin/ssh</rsh_command>
   <cback_command>/usr/bin/cback</cback_command>
   <managed_actions>collect, purge</managed_actions>
   <override>
      <command>cdrecord</command>
      <abs_path>/opt/local/bin/cdrecord</abs_path>
   </override>
   <override>
      <command>mkisofs</command>
      <abs_path>/opt/local/bin/mkisofs</abs_path>
   </override>
   <pre_action_hook>
      <action>collect</action>
      <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
   </pre_action_hook>
   <post_action_hook>
      <action>collect</action>
      <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
   </post_action_hook>
</options>

The following elements are part of the options configuration section: starting_day Day that starts the week. Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared. Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive. working_dir Working (temporary) directory to use for backups.
This directory is used for writing temporary files, such as tar files or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups. The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master). Restrictions: Must be an absolute path backup_user Effective user that backups should run as. This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced). This value is also used as the default remote backup user for remote peers. Restrictions: Must be non-empty backup_group Effective group that backups should run as. This group must exist on the machine which is being configured, and should not be root or some other "powerful" group (although that restriction is not enforced). Restrictions: Must be non-empty rcp_command Default rcp-compatible copy command for staging. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway. Restrictions: Must be non-empty rsh_command Default rsh-compatible command to use for remote shells. The rsh command should be the exact command used for remote shells, including any required options. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.
Restrictions: Must be non-empty cback_command Default cback-compatible command to use on managed remote clients. The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Default set of actions that are managed on remote clients. This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Restrictions: Must be non-empty. override Command to override with a customized path. This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem. This section is optional, and can be repeated as many times as necessary.
This subsection must contain the following two fields: command Name of the command to be overridden, e.g. "cdrecord". Restrictions: Must be a non-empty string. abs_path The absolute path where the overridden command can be found. Restrictions: Must be an absolute path. pre_action_hook Hook configuring a command to be executed before an action. This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. post_action_hook Hook configuring a command to be executed after an action. This is a subsection which configures a command to be executed immediately after a named action.
It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. Peers Configuration The peers configuration section contains a list of the peers managed by a master. This section is only required on a master. This is an example peers configuration section:

<peers>
   <peer>
      <name>machine1</name>
      <type>local</type>
      <collect_dir>/opt/backup/collect</collect_dir>
   </peer>
   <peer>
      <name>machine2</name>
      <type>remote</type>
      <backup_user>backup</backup_user>
      <collect_dir>/opt/backup/collect</collect_dir>
      <ignore_failures>all</ignore_failures>
   </peer>
   <peer>
      <name>machine3</name>
      <type>remote</type>
      <managed>Y</managed>
      <backup_user>backup</backup_user>
      <collect_dir>/opt/backup/collect</collect_dir>
      <rcp_command>/usr/bin/scp</rcp_command>
      <rsh_command>/usr/bin/ssh</rsh_command>
      <cback_command>/usr/bin/cback</cback_command>
      <managed_actions>collect, purge</managed_actions>
   </peer>
</peers>

The following elements are part of the peers configuration section: peer (local version) Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer managed by a master.
This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. The local peer subsection must contain the following fields: name Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local. Restrictions: Must be local. collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp). Restrictions: Must be an absolute path. ignore_failures Ignore failure mode for this peer The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". peer (remote version) Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.
The remote peer subsection must contain the following fields: name Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote. Restrictions: Must be remote. managed Indicates whether this peer is managed. A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command). Restrictions: Must be an absolute path. ignore_failures Ignore failure mode for this peer The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". backup_user Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional.
If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty. rcp_command The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty. rsh_command The rsh-compatible command for this peer. The rsh command should be the exact command used for remote shells, including any required options. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section. Restrictions: Must be non-empty cback_command The cback-compatible command for this peer. The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default cback command from the options section. Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Set of actions that are managed for this peer. This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge. This value only applies if the peer is managed. This field is optional.
If it doesn't exist, the backup will use the default list of managed actions from the options section. Restrictions: Must be non-empty. Collect Configuration The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up. Using a Link Farm Sometimes, it's not very convenient to list directories one by one in the Cedar Backup configuration file. For instance, when backing up your home directory, you often exclude as many directories as you include. The ignore file mechanism can be of some help, but it still isn't very convenient if there are a lot of directories to ignore (or if new directories pop up all of the time). In this situation, one option is to use a link farm rather than listing all of the directories in configuration. A link farm is a directory that contains nothing but a set of soft links to other files and directories. Normally, Cedar Backup does not follow soft links, but you can override this behavior for individual directories using the link_depth and dereference options (see below). When using a link farm, you still have to deal with each backed-up directory individually, but you don't have to modify configuration. Some users find that this works better for them. In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.
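The link farm idea described above can be sketched in a few lines of Python. This is a hypothetical example: the directory names are made up, and a temporary directory stands in for a real home directory. The farm would then be configured as a collect directory with link_depth or dereference enabled so Cedar Backup follows the links.

```python
import os
import tempfile

home = tempfile.mkdtemp()                      # stands in for a real home directory
farm = os.path.join(home, "backup")            # the link farm itself
os.makedirs(farm)

for name in ("projects", "mail", "photos"):    # directories worth backing up
    target = os.path.join(home, name)
    os.makedirs(target)                        # these would already exist in real life
    os.symlink(target, os.path.join(farm, name))

links = sorted(os.listdir(farm))               # the farm holds nothing but soft links
```

To change what gets backed up, you add or remove links in the farm; the Cedar Backup configuration itself never needs to change.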
This is an example collect configuration section:

<collect>
   <collect_dir>/opt/backup/collect</collect_dir>
   <collect_mode>daily</collect_mode>
   <archive_mode>targz</archive_mode>
   <ignore_file>.cbignore</ignore_file>
   <exclude>
      <abs_path>/etc</abs_path>
      <pattern>.*\.conf</pattern>
   </exclude>
   <file>
      <abs_path>/home/root/.profile</abs_path>
   </file>
   <dir>
      <abs_path>/etc</abs_path>
   </dir>
   <dir>
      <abs_path>/var/log</abs_path>
      <collect_mode>incr</collect_mode>
   </dir>
   <dir>
      <abs_path>/opt</abs_path>
      <collect_mode>weekly</collect_mode>
      <exclude>
         <abs_path>/opt/large</abs_path>
         <rel_path>backup</rel_path>
         <pattern>.*tmp</pattern>
      </exclude>
   </dir>
</collect>

The following elements are part of the collect configuration section: collect_dir Directory to collect files into. On a client, this is the directory which tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory. This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form. Restrictions: Must be an absolute path collect_mode Default collect mode. The collect mode describes how frequently a directory is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Default archive mode for collect files. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of tar, targz or tarbz2. ignore_file Default ignore file name. The ignore file is an indicator file.
If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be non-empty. recursion_level Recursion level to use when collecting directories. This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory. Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory. The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If, instead, you want one archive file per home directory, you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc. Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high. This field is optional. If it doesn't exist, the backup will use the default recursion level of zero. Restrictions: Must be an integer.
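The inline example near the top of this section lost its XML markup in rendering. Reassembled from the field descriptions, it corresponds to a fragment along these lines (a sketch: the element names come from this section, but the grouping of the trailing /opt/large, backup and .*tmp values into the last dir's exclude list is a best guess, so verify against a generated sample configuration):

```xml
<collect>
   <collect_dir>/opt/backup/collect</collect_dir>
   <collect_mode>daily</collect_mode>
   <archive_mode>targz</archive_mode>
   <ignore_file>.cbignore</ignore_file>
   <exclude>
      <abs_path>/etc</abs_path>
      <pattern>.*\.conf</pattern>
   </exclude>
   <file>
      <abs_path>/home/root/.profile</abs_path>
   </file>
   <dir>
      <abs_path>/etc</abs_path>
   </dir>
   <dir>
      <abs_path>/var/log</abs_path>
      <collect_mode>incr</collect_mode>
   </dir>
   <dir>
      <abs_path>/opt</abs_path>
      <collect_mode>weekly</collect_mode>
      <exclude>
         <abs_path>/opt/large</abs_path>
         <rel_path>backup</rel_path>
         <pattern>.*tmp</pattern>
      </exclude>
   </dir>
</collect>
```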
exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however. This section is optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: abs_path An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path. pattern A pattern to be recursively excluded from the backup. The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. file A file to be collected. This is a subsection which contains information about a specific file to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.
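The bounded-pattern semantics described for the pattern field can be illustrated with Python's re module (a sketch of the documented behavior, not Cedar Backup's actual matching code):

```python
import re

def is_excluded(pattern: str, path: str) -> bool:
    """Apply an exclusion pattern the way the manual describes:
    bounded at both ends, as if it began with ^ and ended with $."""
    # re.match() already anchors at the start; append $ to anchor the end.
    return re.match(pattern + r"$", path) is not None

# .*apache.* matches /var/log/apache, so that directory is excluded.
# .*\.conf matches /etc/resolv.conf, but NOT /etc/resolv.conf.bak,
# because the pattern is bounded at the end of the string.
```

Because of the implicit anchoring, a pattern like apache matches only a path that is literally "apache"; to match it anywhere, write .*apache.* as in the example above.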
The collect file subsection contains the following fields: abs_path Absolute path of the file to collect. Restrictions: Must be an absolute path. collect_mode Collect mode for this file. The collect mode describes how frequently a file is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Archive mode for this file. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2. dir A directory to be collected. This is a subsection which contains information about a specific directory to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed. The collect directory subsection contains the following fields: abs_path Absolute path of the directory to collect. The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level. The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc. Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up. Restrictions: Must be an absolute path.
collect_mode Collect mode for this directory. The collect mode describes how frequently a directory is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Archive mode for this directory. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2. ignore_file Ignore file name for this directory. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This field is optional. If it doesn't exist, the backup will use the default ignore file name. Restrictions: Must be non-empty. link_depth Link depth value to use for this directory. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.
This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed. Restrictions: If set, must be an integer ≥ 0. dereference Whether to dereference soft links. If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well. This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory. This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced. Restrictions: Must be a boolean (Y or N). exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: abs_path An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path. rel_path A relative path to be recursively excluded from the backup. The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web, a configured relative path of something/else would exclude the path /opt/web/something/else.
If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. [24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. Stage Configuration The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged to. This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging. This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration: /opt/backup/stage This is an example stage configuration section that overrides the default list of peers: /opt/backup/stage machine1 local /opt/backup/collect machine2 remote backup /opt/backup/collect The following elements are part of the stage configuration section: staging_dir Directory to stage files into. This is the directory into which the master stages collected data from each of the clients.
Within the staging directory, data is staged into date-based directories by peer name. For instance, peer "daystrom" backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself. This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space. Restrictions: Must be an absolute path. peer (local version) Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The local peer subsection must contain the following fields: name Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local. Restrictions: Must be local. collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).
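The peer-override example earlier in this section lost its markup in rendering; reconstructed from the field names in this section, it is a fragment along these lines (a sketch: element order and nesting should be verified against a generated sample configuration):

```xml
<stage>
   <staging_dir>/opt/backup/stage</staging_dir>
   <peer>
      <name>machine1</name>
      <type>local</type>
      <collect_dir>/opt/backup/collect</collect_dir>
   </peer>
   <peer>
      <name>machine2</name>
      <type>remote</type>
      <backup_user>backup</backup_user>
      <collect_dir>/opt/backup/collect</collect_dir>
   </peer>
</stage>
```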
Restrictions: Must be an absolute path. peer (remote version) Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The remote peer subsection must contain the following fields: name Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote. Restrictions: Must be remote. collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command). Restrictions: Must be an absolute path. backup_user Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional. If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty. rcp_command The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional.
If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty. Store Configuration The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device. This is an example store configuration section: /opt/backup/stage cdrw-74 cdwriter /dev/cdrw 0,0,0 4 Y Y Y N 15 2 weekly 1.3 The following elements are part of the store configuration section: source_dir Directory whose contents should be written to media. This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc. Restrictions: Must be an absolute path. device_type Type of the device used to write the media. This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter). This field is optional. If it doesn't exist, the cdwriter device type is assumed. Restrictions: If set, must be either cdwriter or dvdwriter. media_type Type of the media in the device. Unless you want to throw away a backup disc every week, you are probably best off using rewritable media. You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called "Media and Device Types" (in Chapter 2, Basic Concepts). Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter. target_device Filesystem device name for writer device. This value is required for both CD writers and DVD writers. This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.
In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified. Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled. Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink. Restrictions: Must be an absolute path. target_scsi_id SCSI id for the writer device. This value is optional for CD writers and is ignored for DVD writers. If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord. Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord. For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form method:scsibus,target,lun. An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord). See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured. Restrictions: If set, must be a valid SCSI identifier. drive_speed Speed of the drive, i.e. 2 for a 2x device. This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.
For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media. Restrictions: If set, must be an integer ≥ 1. check_data Whether the media should be validated. This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch. Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). check_media Whether the media should be checked before writing to it. By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.) If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). warn_midnite Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc.
For instance, a warning would be generated if valid store data was only found in the day before or day after the current day. Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something "strange" might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). no_eject Indicates that the writer device should not be ejected. Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session). For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will never issue an eject command to your writer. Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). refresh_media_delay Number of seconds to delay after refreshing media. This field is optional. If it doesn't exist, no delay will occur. Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds. Restrictions: If set, must be an integer ≥ 1. eject_delay Number of seconds to delay after ejecting the tray. This field is optional. If it doesn't exist, no delay will occur.
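For reference, the store example given at the start of this section maps onto these fields roughly as follows (a sketch: the element names come from this section and the values from the flattened example, so treat the exact ordering and nesting as approximate):

```xml
<store>
   <source_dir>/opt/backup/stage</source_dir>
   <media_type>cdrw-74</media_type>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrw</target_device>
   <target_scsi_id>0,0,0</target_scsi_id>
   <drive_speed>4</drive_speed>
   <check_data>Y</check_data>
   <check_media>Y</check_media>
   <warn_midnite>Y</warn_midnite>
   <no_eject>N</no_eject>
   <refresh_media_delay>15</refresh_media_delay>
   <eject_delay>2</eject_delay>
   <blank_behavior>
      <blank_mode>weekly</blank_mode>
      <blank_factor>1.3</blank_factor>
   </blank_behavior>
</store>
```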
If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly: either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds. Restrictions: If set, must be an integer ≥ 1. blank_behavior Optimized blanking strategy. For more information about Cedar Backup's optimized blanking strategy, see the section called "Optimized Blanking Strategy". This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor. blank_mode Blanking mode. Restrictions: Must be one of "daily" or "weekly". blank_factor Blanking factor. Restrictions: Must be a floating point number ≥ 0. Purge Configuration The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged. Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0). If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging staging directories about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action. You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.
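The retention policy just described (keep staging for a week, purge collect daily) might be configured like this (a sketch using the purge elements documented in this section; the paths and retain_days values are illustrative):

```xml
<purge>
   <dir>
      <abs_path>/opt/backup/stage</abs_path>
      <retain_days>7</retain_days>
   </dir>
   <dir>
      <abs_path>/opt/backup/collect</abs_path>
      <retain_days>0</retain_days>
   </dir>
</purge>
```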
This is an example purge configuration section: /opt/backup/stage 7 /opt/backup/collect 0 The following elements are part of the purge configuration section: dir A directory to purge within. This is a subsection which contains information about a specific directory to purge within. This section can be repeated as many times as is necessary. At least one purge directory must be configured. The purge directory subsection contains the following fields: abs_path Absolute path of the directory to purge within. The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than "retain days" days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed. The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files. Restrictions: Must be an absolute path. retain_days Number of days to retain old files. Once it has been more than this many days since a file was last modified, it is a candidate for removal. Restrictions: Must be an integer ≥ 0. Extensions Configuration The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional. Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions. Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line.
The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400. Warning Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory. If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions has completed, and you would get no warning about this in your email! So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the "database" command-line action. You have been told that this function is called "foo.bar()". You think of this backup as a "collect" kind of action, so you want it to be performed immediately before the collect action. To configure this extension, you would list an action with a name "database", a module "foo", a function name "bar" and an index of "99". This is how the hypothetical action would be configured: database foo bar 99 The following elements are part of the extensions configuration section: action This is a subsection that contains configuration related to a single extended action. This section can be repeated as many times as is necessary. The action subsection contains the following fields: name Name of the extended action.
Restrictions: Must be a non-empty string consisting of only lower-case letters and digits. module Name of the Python module associated with the extension function. Restrictions: Must be a non-empty string and a valid Python identifier. function Name of the Python extension function within the module. Restrictions: Must be a non-empty string and a valid Python identifier. index Index of action, for execution ordering. Restrictions: Must be an integer ≥ 0. Setting up a Pool of One Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one). Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated. Tip This setup procedure discusses how to set up Cedar Backup in the "normal case" for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual. Step 1: Decide when you will run your backup. There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly.
Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc. Warning Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused" until the next week begins, or until you re-run the backup using the --full flag. Step 2: Make sure email works. Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors. Step 3: Configure your writer device. Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.
Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured.

Note: There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note: Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e.
password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note: You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format" (above), create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

Warning: Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.
Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened" must be "closed" appropriately.

Step 8: Test your backup.

Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read. If Cedar Backup ever completes "normally" but the disc that is created is not usable, please report this as a bug. ^[25] To be safe, always enable the consistency check option in the store configuration section.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

30 00 * * * root cback all

Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

#!/bin/sh
cback all

You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

Note: For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the "Single machine (pool of one)" entry in the file, and change the line so that the backup goes off when you want it to.

Setting up a Client Peer Node

Cedar Backup has been designed to backup entire "pools" of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.
Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Note: See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning: Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused"
until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure the master in your backup pool.

You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client. To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

user@machine> cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note: Standard Debian systems come with a user named backup.
You may choose to stay with this user or create another one. Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600. If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night). You should create a collect directory and a working (temporary) directory.
One recommended layout is this:

/opt/
   backup/
      collect/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note: You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format" (above), create a configuration file for your machine. Since you are working with a client, you must configure all action-specific sections for the collect and purge actions. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

Warning: Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file.
This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened" must be "closed" appropriately.

Step 8: Test your backup.

Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback collect
30 06 * * * root cback purge

You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. ^[26]

Note: For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the "Client machine" entries in the file, and change the lines so that the backup goes off when you want it to.

Setting up a Master Peer Node

Cedar Backup has been designed to backup entire "pools" of machines. In any given pool, there is one master and some number of clients.
Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Tip: This setup procedure discusses how to set up Cedar Backup in the "normal case" for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer.
Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning: Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused" until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.
Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured.

Note: There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note: Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e.
mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note: You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format"
(above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

Note: The master can treat itself as a "client" peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master. Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a "consolidation point" machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

Warning: Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened"
must be "closed" appropriately.

Step 8: Test connectivity to client machines.

This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client. Log in as the backup user on the master, and then use the command ssh user@machine, where user is the name of the backup user on the client machine, and machine is the name of the client machine. If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

Step 9: Test your backup.

Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.) When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read. You may also want to run cback purge on the master and each client once you have finished validating that everything worked. If Cedar Backup ever completes "normally" but the disc that is created is not usable, please report this as a bug. ^[25] To be safe, always enable the consistency check option in the store configuration section.

Step 10: Modify the backup cron jobs.
Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback collect
30 02 * * * root cback stage
30 04 * * * root cback store
30 06 * * * root cback purge

You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. ^[26]

Note: For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the "Master machine" entries in the file, and change the lines so that the backup goes off when you want it to.

Configuring your Writer Device

Device Types

In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

Devices identified by device name

For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify the device name in configuration; you can either leave the SCSI id blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations,
for instance, when the media needs to be mounted to run the consistency check.

Devices identified by SCSI id

Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type. In order to use a SCSI device with Cedar Backup, you must know both the SCSI id and the device name. The SCSI id will be used to write to media using cdrecord, and the device name will be used for other filesystem operations. A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system. On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide both the filesystem device path and the emulated SCSI id in configuration, just like for a real SCSI device. You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. "ATA:1,1,1").

Linux Notes

On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later). Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a "method" indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.
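The two addressing forms just described differ only in the optional method prefix. A short Python helper, invented purely for illustration (these functions are not part of Cedar Backup), shows how a method-prefixed id relates to a plain scsibus,target,lun address:

```python
def format_scsi_id(bus, target, lun, method=None):
    """Build a (possibly method-prefixed) SCSI id string, e.g. 'ATA:0,0,0'."""
    address = "%d,%d,%d" % (bus, target, lun)
    return "%s:%s" % (method, address) if method else address

def parse_scsi_id(scsi_id):
    """Split a SCSI id into (method, bus, target, lun); method may be None."""
    method, _, address = scsi_id.rpartition(":")
    bus, target, lun = (int(part) for part in address.split(","))
    return (method or None, bus, target, lun)
```

So "1,6,2" and "ATA:1,6,2" name the same bus/target/lun location; only the access method differs.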
However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

Finding your Linux CD Writer

Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

cdrecord -prcap dev=/dev/cdrom

Running this command on my hardware gives output that looks like this (just the top few lines):

Device type    : Removable CD-ROM
Version        : 0
Response Format: 2
Capabilities   :
Vendor_info    : 'LITE-ON '
Identification : 'DVDRW SOHW-1673S'
Revision       : 'JS02'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Drive capabilities, per MMC-3 page 2A:

If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into configuration and leave the SCSI id blank. If this doesn't work, you should try to find an ATA or ATAPI device:

cdrecord -scanbus dev=ATA
cdrecord -scanbus dev=ATAPI

On my development system, I get a result that looks something like this for ATA:

scsibus1:
   1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
   1,1,0   101) *
   1,2,0   102) *
   1,3,0   103) *
   1,4,0   104) *
   1,5,0   105) *
   1,6,0   106) *
   1,7,0   107) *

Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into configuration and put the emulated SCSI id (in this case, ATA:1,0,0) into the SCSI id setting. Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

Mac OS X Notes

On a Mac OS X (darwin) system, things get strange.
Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, e.g. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.^[27]

Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the "automount" function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully. If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution.

Optimized Blanking Strategy

When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period. Since rewritable media can be blanked only a finite number of times before becoming unusable, some users (especially users of rewritable DVD media, with its large capacity) may prefer to blank the media less often.

If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.
This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration; otherwise you will risk losing data.

If you are using the daily blanking mode, you can typically set the blanking factor to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup.

If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

   bytes available / (1 + bytes required) ≤ blanking factor

Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

   Total size of weekly backup / Full backup size at the start of the week

This ratio can be estimated using a week or two of previous backups.
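As a sketch, the blanking decision described above can be expressed as a small function. This is illustrative only (the names are mine, not Cedar Backup's); it simply encodes the relationship given above.

```python
def should_blank(bytes_available, bytes_required, blanking_factor):
    """Blank the media when available / (1 + required) <= factor.

    This mirrors the relationship described above; it is not the
    actual Cedar Backup implementation.
    """
    return bytes_available / (1.0 + bytes_required) <= blanking_factor

# Daily mode with factor 1.0: blank whenever today's backup won't fit.
print(should_blank(3000, 3100, 1.0))    # True: not enough space left
print(should_blank(30000, 3100, 1.0))   # False: plenty of space left
```

Note how a factor of 1.0 reduces to the daily-mode behavior described above: the media is blanked exactly when the available space drops below the space required.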
For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

   /opt/backup/staging# du -s 2007/03/*
   3040    2007/03/01
   3044    2007/03/02
   6812    2007/03/03
   3044    2007/03/04
   3152    2007/03/05
   3056    2007/03/06
   3060    2007/03/07
   3056    2007/03/08
   4776    2007/03/09
   6812    2007/03/10
   11824   2007/03/11

In this case, the ratio is approximately 4:

   (6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571

To be safe, you might choose to configure a factor of 5.0. Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary. If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.

--------------

^[22] See http://www.xml.com/pub/a/98/10/guide0.html for a basic introduction to XML.
^[23] See the section called "The Backup Process" in Chapter 2, Basic Concepts.
^[24] See http://docs.python.org/lib/re-syntax.html
^[25] See "SF Bug Tracking" at http://cedar-backup.sourceforge.net/.
^[26] See the section called "Coordination between Master and Clients" in Chapter 2, Basic Concepts.
^[27] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information.

Chapter 6. Official Extensions

Table of Contents

System Information Extension
Subversion Extension
MySQL Extension
PostgreSQL Extension
Mbox Extension
Encrypt Extension
Split Extension
Capacity Extension

System Information Extension

The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a "broken" system. It is intended to be run either immediately before or immediately after the standard collect action. This extension saves off the following information to the configured Cedar Backup collect directory.
Saved-off data is always compressed using bzip2.

 * Currently-installed Debian packages via dpkg --get-selections
 * Disk partition information via fdisk -l
 * System-wide mounted filesystem contents, via ls -laR

The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>sysinfo</name>
         <module>CedarBackup2.extend.sysinfo</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

Subversion Extension

The Subversion Extension is a Cedar Backup extension used to back up Subversion^[28] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.
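To illustrate the incremental svnadmin dump approach mentioned above, here is a hypothetical sketch of how such a command line could be assembled. The -r and --incremental options are standard svnadmin dump options; the function itself is my own illustration, not part of Cedar Backup.

```python
def build_dump_command(repo_path, start_rev, end_rev):
    """Build an incremental 'svnadmin dump' command for a revision range.

    Dumping with -r START:END --incremental captures only the listed
    revisions, so each backup run can pick up where the last one stopped.
    """
    return ["svnadmin", "dump", "--incremental",
            "-r", "%d:%d" % (start_rev, end_rev), repo_path]

print(build_dump_command("/opt/public/svn/docs", 10, 25))
```

In Cedar Backup itself, any such command would be executed through the CedarBackup2.util.executeCommand function, as required by the extension rules in Appendix A.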
^[29]

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>subversion</name>
         <module>CedarBackup2.extend.subversion</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example subversion configuration section:

   <subversion>
      <collect_mode>incr</collect_mode>
      <compress_mode>bzip2</compress_mode>
      <repository>
         <abs_path>/opt/public/svn/docs</abs_path>
      </repository>
      <repository>
         <abs_path>/opt/public/svn/web</abs_path>
         <compress_mode>gzip</compress_mode>
      </repository>
      <repository_dir>
         <abs_path>/opt/private/svn</abs_path>
         <collect_mode>daily</collect_mode>
      </repository_dir>
   </subversion>

The following elements are part of the subversion configuration section:

collect_mode
   Default collect mode. The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts). This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

   Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

   Restrictions: Must be one of daily, weekly or incr.

compress_mode
   Default compress mode. Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

   Restrictions: Must be one of none, gzip or bzip2.

repository
   A Subversion repository to be collected. This is a subsection which contains information about a specific Subversion repository to be backed up.
This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

   The repository subsection contains the following fields:

   collect_mode
      Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode.

      Restrictions: Must be one of daily, weekly or incr.

   compress_mode
      Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode.

      Restrictions: Must be one of none, gzip or bzip2.

   abs_path
      Absolute path of the Subversion repository to back up.

      Restrictions: Must be an absolute path.

repository_dir
   A Subversion parent repository directory to be collected. This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

   The repository_dir subsection contains the following fields:

   collect_mode
      Collect mode for this repository directory. This field is optional. If it doesn't exist, the backup will use the default collect mode.

      Restrictions: Must be one of daily, weekly or incr.

   compress_mode
      Compress mode for this repository directory. This field is optional. If it doesn't exist, the backup will use the default compress mode.

      Restrictions: Must be one of none, gzip or bzip2.

   abs_path
      Absolute path of the Subversion parent repository directory to back up.

      Restrictions: Must be an absolute path.

   exclude
      List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this Subversion parent directory. This section is entirely optional, and if it exists can also be empty.
The exclude subsection can contain one or more of each of the following fields:

   rel_path
      A relative path to be excluded from the backup. The path is assumed to be relative to the Subversion parent directory itself. For instance, if the configured Subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software. This field can be repeated as many times as is necessary.

      Restrictions: Must be non-empty.

   pattern
      A pattern to be excluded from the backup. The pattern must be a Python regular expression.^[24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary.

      Restrictions: Must be non-empty.

MySQL Extension

The MySQL Extension is a Cedar Backup extension used to back up MySQL^[30] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

Note: This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2.

Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

Warning: The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration.
This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

   [mysqldump]
   user     = root
   password =

Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

   [mysqldump]
   host = remote.host

For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>mysql</name>
         <module>CedarBackup2.extend.mysql</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

   <mysql>
      <compress_mode>bzip2</compress_mode>
      <all>Y</all>
   </mysql>

If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

   <mysql>
      <user>root</user>
      <password>password</password>
      <compress_mode>bzip2</compress_mode>
      <all>Y</all>
   </mysql>

The following elements are part of the MySQL configuration section:

user
   Database user. The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user.
Typically, this would be root (i.e. the database root user, not the system root user).

   This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

   Restrictions: If provided, must be non-empty.

password
   Password associated with the database user. This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

   Restrictions: If provided, must be non-empty.

compress_mode
   Compress mode. MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

   Restrictions: Must be one of none, gzip or bzip2.

all
   Indicates whether to back up all databases. If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file.

   Restrictions: Must be a boolean (Y or N).

database
   Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

   Restrictions: Must be non-empty.

PostgreSQL Extension

Community-contributed Extension

This is a community-contributed extension provided by Antoine Beaupre ("The Anarcat"). I have added regression tests around the configuration parsing code and I will maintain this section in the user manual based on his source code documentation. Unfortunately, I don't have any PostgreSQL databases with which to test the functional code.
While I have code-reviewed the code and it looks both sensible and safe, I have to rely on the author to ensure that it works properly.

The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL^[31] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2.

Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file.

This extension always produces a full backup. There is currently no facility for making incremental backups.

Warning: Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>postgresql</name>
         <module>CedarBackup2.extend.postgresql</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

   <postgresql>
      <compress_mode>bzip2</compress_mode>
      <user>username</user>
      <all>Y</all>
   </postgresql>

If you decide to back up specific databases, then you would list them individually, like this:

   <postgresql>
      <compress_mode>bzip2</compress_mode>
      <user>username</user>
      <all>N</all>
      <database>db1</database>
      <database>db2</database>
   </postgresql>

The following elements are part of the PostgreSQL configuration section:

user
   Database user.
The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user.

   This value is optional. Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.

   Restrictions: If provided, must be non-empty.

compress_mode
   Compress mode. PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

   Restrictions: Must be one of none, gzip or bzip2.

all
   Indicates whether to back up all databases. If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file.

   Restrictions: Must be a boolean (Y or N).

database
   Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

   Restrictions: Must be non-empty.

Mbox Extension

The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style "mbox" mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis.
This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders.

What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space.

Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>mbox</name>
         <module>CedarBackup2.extend.mbox</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

   <mbox>
      <collect_mode>incr</collect_mode>
      <compress_mode>gzip</compress_mode>
      <file>
         <abs_path>/home/user1/mail/greylist</abs_path>
         <collect_mode>daily</collect_mode>
      </file>
      <dir>
         <abs_path>/home/user2/mail</abs_path>
      </dir>
      <dir>
         <abs_path>/home/user3/mail</abs_path>
         <exclude>
            <rel_path>spam</rel_path>
            <pattern>.*debian.*</pattern>
         </exclude>
      </dir>
   </mbox>

Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively.

Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed, only relative path exclusions and patterns.

The following elements are part of the mbox configuration section:

collect_mode
   Default collect mode. The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts). This value is the collect mode that will be used by default during the backup process.
Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

   Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

   Restrictions: Must be one of daily, weekly or incr.

compress_mode
   Default compress mode. Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

   Restrictions: Must be one of none, gzip or bzip2.

file
   An individual mbox file to be collected. This is a subsection which contains information about an individual mbox file to be backed up. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

   The file subsection contains the following fields:

   collect_mode
      Collect mode for this file. This field is optional. If it doesn't exist, the backup will use the default collect mode.

      Restrictions: Must be one of daily, weekly or incr.

   compress_mode
      Compress mode for this file. This field is optional. If it doesn't exist, the backup will use the default compress mode.

      Restrictions: Must be one of none, gzip or bzip2.

   abs_path
      Absolute path of the mbox file to back up.

      Restrictions: Must be an absolute path.

dir
   An mbox directory to be collected. This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively.
Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored.

   This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

   The dir subsection contains the following fields:

   collect_mode
      Collect mode for this directory. This field is optional. If it doesn't exist, the backup will use the default collect mode.

      Restrictions: Must be one of daily, weekly or incr.

   compress_mode
      Compress mode for this directory. This field is optional. If it doesn't exist, the backup will use the default compress mode.

      Restrictions: Must be one of none, gzip or bzip2.

   abs_path
      Absolute path of the mbox directory to back up.

      Restrictions: Must be an absolute path.

   exclude
      List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory. This section is entirely optional, and if it exists can also be empty.

      The exclude subsection can contain one or more of each of the following fields:

      rel_path
         A relative path to be excluded from the backup. The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM. This field can be repeated as many times as is necessary.

         Restrictions: Must be non-empty.

      pattern
         A pattern to be excluded from the backup. The pattern must be a Python regular expression.^[24] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary.

         Restrictions: Must be non-empty.

Encrypt Extension

The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run.
This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc.

There are several different ways encryption could have been built in to or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced.

Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

Warning: If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe, someplace other than on your backup disc. If you lose your secret key, your backup will be useless. I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (e.g. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)

An encrypted backup has the same file structure as a normal backup, so all of the instructions in Appendix C, Data Recovery apply.
The only difference is that encrypted files will have an additional .gpg extension (so, for instance, file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>encrypt</name>
         <module>CedarBackup2.extend.encrypt</module>
         <function>executeAction</function>
         <index>301</index>
      </action>
   </extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

   <encrypt>
      <encrypt_mode>gpg</encrypt_mode>
      <encrypt_target>Backup User</encrypt_target>
   </encrypt>

The following elements are part of the Encrypt configuration section:

encrypt_mode
   Encryption mode. This value specifies which encryption mechanism will be used by the extension. Currently, only the GPG public-key encryption mechanism is supported.

   Restrictions: Must be gpg.

encrypt_target
   Encryption target. The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.

Split Extension

The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc.
You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span.

The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files at fixed byte boundaries. It has no knowledge of file formats.

Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback-span might put an individual file on any disc in a set; the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>split</name>
         <module>CedarBackup2.extend.split</module>
         <function>executeAction</function>
         <index>299</index>
      </action>
   </extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

   <split>
      <size_limit>250 MB</size_limit>
      <split_size>100 MB</split_size>
   </split>

The following elements are part of the Split configuration section:

size_limit
   Size limit. Files with a size strictly larger than this limit will be split by the extension.

   You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB".

   Restrictions: Must be a size as described above.

split_size
   Split size. This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size.

   You can enter this value in two different forms.
It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB".

   Restrictions: Must be a size as described above.

Capacity Extension

The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused. This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>capacity</name>
      <module>CedarBackup2.extend.capacity</module>
      <function>executeAction</function>
      <index>299</index>
   </action>
</extensions>

This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

<capacity>
   <max_percentage>95.5</max_percentage>
</capacity>

This example configures the extension to warn if the media has fewer than 16 MB free:

<capacity>
   <min_bytes>16 MB</min_bytes>
</capacity>

The following elements are part of the Capacity configuration section:

max_percentage

   Maximum percentage of the media that may be utilized. You must provide either this value or the min_bytes value.

   Restrictions: Must be a floating point number between 0.0 and 100.0.

min_bytes

   Minimum number of free bytes that must be available. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". You must provide either this value or the max_percentage value.

   Restrictions: Must be a byte quantity as described above.
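The byte-quantity format accepted by size_limit, split_size and min_bytes can be illustrated with a small parser. This is only a sketch, not Cedar Backup's actual parsing code, and it assumes binary units (1 KB = 1024 bytes):

```python
def parse_byte_quantity(value):
    """Parse a quantity like "10240", "250 MB" or "1.1 GB" into bytes.

    Illustrative only; assumes binary units (1 KB = 1024 bytes).
    """
    units = {"KB": 1024.0, "MB": 1024.0 ** 2, "GB": 1024.0 ** 3}
    parts = value.split()
    if len(parts) == 1:
        return float(parts[0])   # a plain number is already in bytes
    number, unit = parts
    return float(number) * units[unit.upper()]
```

For example, parse_byte_quantity("250 MB") yields the same number of bytes that the capacity and split extensions would treat as 250 MB.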
--------------

^[28] See http://subversion.org

^[29] For instance, see the "Backups" section on this page: http://freehackers.org/~shlomif/svn-raweb-light/subversion.cgi/trunk/notes/fsfs

^[30] See http://www.mysql.com

^[31] See http://www.postgresql.org/

Appendix A. Extension Architecture Interface

The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension.

You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.

There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

<extensions>
   <action>
      <name>database</name>
      <module>foo</module>
      <function>bar</function>
      <index>101</index>
   </action>
</extensions>

In this case, the action "database" has been mapped to the extension function foo.bar().

Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

1. Extensions may not write to stdout or stderr using functions such as print or sys.write.

2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

4. Extensions may not return any value.

5.
Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration.

def function(configPath, options, config):
   """Sample extension function."""
   pass

This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed.
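A minimal extended action that honors these rules might look like the sketch below. The log-topic suffix and the failure condition are hypothetical choices for this illustration; only the CedarBackup2.log prefix and the three-argument signature come from the interface described above.

```python
import logging

# Rule 2: all flow-of-control logging happens under the CedarBackup2.log
# topic.  The ".extend.example" suffix is a hypothetical choice.
logger = logging.getLogger("CedarBackup2.log.extend.example")

def executeAction(configPath, options, config):
    """Hypothetical extended action obeying the interface rules."""
    logger.info("Executing the example extended action.")
    if configPath is None:
        # Rule 5: report failure by raising a descriptive exception,
        # never by writing to stdout or stderr (rule 1).
        raise ValueError("No configuration path was provided.")
    # Real work would go here; any command-line utilities would be
    # invoked through CedarBackup2.util.executeCommand (rule 3).
    # Rule 4: the function returns no value.
```

A real extension would register its own module and function names in the <extensions> section, exactly like the foo.bar() example above.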
The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3).

If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions. For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

<database>
   <repository>/path/to/repo1</repository>
   <repository>/path/to/repo2</repository>
</database>

In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.

Appendix B. Dependencies

Python 2.5

Version 2.5 of the Python interpreter was released on 19 Sep 2006, so most current Linux and BSD distributions should include it.
+-------------------------------------------------------------------------+
|   Source    |                            URL                            |
|-------------+-----------------------------------------------------------|
|upstream     |http://www.python.org                                      |
|-------------+-----------------------------------------------------------|
|Debian       |http://packages.debian.org/stable/python/python2.5         |
|-------------+-----------------------------------------------------------|
|Gentoo       |http://packages.gentoo.org/packages/?category=dev-lang;name|
|             |=python;                                                   |
|-------------+-----------------------------------------------------------|
|RPM          |http://rpmfind.net/linux/rpm2html/search.php?query=python  |
|-------------+-----------------------------------------------------------|
|Mac OS X     |http://fink.sourceforge.net/pdb/package.php/python25       |
|(fink)       |                                                           |
+-------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

RSH Server and Client

Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client. The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.
+-------------------------------------------------------------------------+
| Source |                              URL                               |
|--------+----------------------------------------------------------------|
|upstream|http://www.openssh.com/                                         |
|--------+----------------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/net/ssh                       |
|--------+----------------------------------------------------------------|
|Gentoo  |http://packages.gentoo.org/packages/?category=net-misc;name=    |
|        |openssh;                                                        |
|--------+----------------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=openssh      |
|--------+----------------------------------------------------------------|
|Mac OS X|built-in                                                        |
+-------------------------------------------------------------------------+

If you can't find SSH client or server packages for your system, install from the package source, using the "upstream" link.

mkisofs

The mkisofs command is used to create ISO filesystem images that can later be written to backup media.

+-------------------------------------------------------------------------+
|    Source     |                           URL                           |
|---------------+---------------------------------------------------------|
|upstream       |http://freshmeat.net/projects/mkisofs/                   |
|---------------+---------------------------------------------------------|
|Debian         |http://packages.debian.org/stable/otherosfs/mkisofs      |
|---------------+---------------------------------------------------------|
|Gentoo         |unknown                                                  |
|---------------+---------------------------------------------------------|
|RPM            |http://rpmfind.net/linux/rpm2html/search.php?query=      |
|               |mkisofs                                                  |
|---------------+---------------------------------------------------------|
|Mac OS X (fink)|http://fink.sourceforge.net/pdb/package.php/mkisofs      |
+-------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.
I have classified Gentoo as "unknown" because I can't find a specific package for that platform. I think that maybe mkisofs is part of the cdrtools package (see below), but I'm not sure. Any Gentoo users want to enlighten me?

cdrecord

The cdrecord command is used to write ISO images to CD media in a backup device.

+-------------------------------------------------------------------------+
|   Source    |                            URL                            |
|-------------+-----------------------------------------------------------|
|upstream     |http://freshmeat.net/projects/cdrecord/                    |
|-------------+-----------------------------------------------------------|
|Debian       |http://packages.debian.org/stable/otherosfs/cdrecord       |
|-------------+-----------------------------------------------------------|
|Gentoo       |http://packages.gentoo.org/packages/?category=app-cdr;name=|
|             |cdrtools;                                                  |
|-------------+-----------------------------------------------------------|
|RPM          |http://rpmfind.net/linux/rpm2html/search.php?query=cdrecord|
|-------------+-----------------------------------------------------------|
|Mac OS X     |http://fink.sourceforge.net/pdb/search.php?summary=cdrecord|
|(fink)       |                                                           |
+-------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

dvd+rw-tools

The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.
+-------------------------------------------------------------------------+
|   Source   |                             URL                            |
|------------+------------------------------------------------------------|
|upstream    |http://fy.chalmers.se/~appro/linux/DVD+RW/                  |
|------------+------------------------------------------------------------|
|Debian      |http://packages.debian.org/stable/utils/dvd+rw-tools        |
|------------+------------------------------------------------------------|
|Gentoo      |http://packages.gentoo.org/packages/?category=app-cdr;name= |
|            |dvd%2Brw-tools                                              |
|------------+------------------------------------------------------------|
|RPM         |http://rpmfind.net/linux/rpm2html/search.php?query=         |
|            |dvd+rw-tools                                                |
|------------+------------------------------------------------------------|
|Mac OS X    |http://pdb.finkproject.org/pdb/package.php/dvd+rw-tools     |
|(fink)      |                                                            |
+-------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

eject and volname

The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc. The volname command is used to determine the volume name of media in a backup device.
+-------------------------------------------------------------------------+
|   Source    |                            URL                            |
|-------------+-----------------------------------------------------------|
|upstream     |http://sourceforge.net/projects/eject                      |
|-------------+-----------------------------------------------------------|
|Debian       |http://packages.debian.org/stable/utils/eject              |
|-------------+-----------------------------------------------------------|
|Gentoo       |http://packages.gentoo.org/packages/?category=sys-apps;name|
|             |=eject;                                                    |
|-------------+-----------------------------------------------------------|
|RPM          |http://rpmfind.net/linux/rpm2html/search.php?query=eject   |
|-------------+-----------------------------------------------------------|
|Mac OS X     |http://fink.sourceforge.net/pdb/package.php/eject          |
|(fink)       |                                                           |
+-------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

mount and umount

The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

+-----------------------------------------------------------------+
| Source |                          URL                           |
|--------+--------------------------------------------------------|
|upstream|http://freshmeat.net/projects/util-linux/               |
|--------+--------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/base/mount            |
|--------+--------------------------------------------------------|
|Gentoo  |unknown                                                 |
|--------+--------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=mount|
|--------+--------------------------------------------------------|
|Mac OS X|built-in                                                |
+-----------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

I have classified Gentoo as "unknown"
because I can't find a specific package for that platform. It may just be that these two utilities are considered standard, and don't have an independent package of their own. Any Gentoo users want to enlighten me?

I have classified Mac OS X as "built-in" because that operating system does contain a mount command. However, it isn't really compatible with Cedar Backup's idea of mount, and in fact what Cedar Backup needs is closer to the hdiutil command. There are other issues related to that command as well, which is why the store action is not really supported on Mac OS X.

grepmail

The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

+-------------------------------------------------------------------------+
| Source |                              URL                               |
|--------+----------------------------------------------------------------|
|upstream|http://freshmeat.net/projects/grepmail/                         |
|--------+----------------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/mail/grepmail                 |
|--------+----------------------------------------------------------------|
|Gentoo  |http://packages.gentoo.org/packages/?category=net-mail;name=    |
|        |grepmail                                                        |
|--------+----------------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=grepmail     |
|--------+----------------------------------------------------------------|
|Mac OS X|http://pdb.finkproject.org/pdb/package.php/grepmail             |
+-------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

gpg

The gpg command is used by the encrypt extension to encrypt files.
+-------------------------------------------------------------------------+
| Source |                              URL                               |
|--------+----------------------------------------------------------------|
|upstream|http://freshmeat.net/projects/gnupg/                            |
|--------+----------------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/utils/gnupg                   |
|--------+----------------------------------------------------------------|
|Gentoo  |http://packages.gentoo.org/packages/?category=app-crypt;name=   |
|        |gnupg                                                           |
|--------+----------------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=gnupg        |
|--------+----------------------------------------------------------------|
|Mac OS X|http://pdb.finkproject.org/pdb/package.php/gnupg                |
+-------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

split

The split command is used by the split extension to split up large files. This command is typically part of the core operating system install and is not distributed in a separate package.

Appendix C. Data Recovery

Table of Contents

Finding your Data
Recovering Filesystem Data
   Full Restore
   Partial Restore
Recovering MySQL Data
Recovering Subversion Data
Recovering Mailbox Data
Recovering Data split by the Split Extension

Finding your Data

The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is that, if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.)
Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name.

This is the root directory of my example disc:

root:/mnt/cdrw# ls -l
total 4
drwxr-x--- 3 backup backup 4096 Sep 01 06:30 2005/

In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).

Within each year directory is one subdirectory for each month represented in the backup.

root:/mnt/cdrw/2005# ls -l
total 2
dr-xr-xr-x 6 root root 2048 Sep 11 05:30 09/

In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005).

Within each month directory is one subdirectory for each day represented in the backup.

root:/mnt/cdrw/2005/09# ls -l
total 8
dr-xr-xr-x 5 root root 2048 Sep  7 05:30 07/
dr-xr-xr-x 5 root root 2048 Sep  8 05:30 08/
dr-xr-xr-x 5 root root 2048 Sep  9 05:30 09/
dr-xr-xr-x 5 root root 2048 Sep 11 05:30 11/

Depending on how far into the backup week you are, you might have as few as one daily directory in here, or as many as seven.
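The year/month/day layout described above lends itself to simple scripting. This sketch (illustrative only, not part of Cedar Backup) locates the most recent daily directory under a backup root:

```python
import os
import tempfile

def newest_daily_directory(root):
    """Return the most recent YYYY/MM/DD daily directory under a backup
    root laid out as described above."""
    dates = []
    for year in os.listdir(root):
        for month in os.listdir(os.path.join(root, year)):
            for day in os.listdir(os.path.join(root, year, month)):
                dates.append((year, month, day))
    # Zero-padded directory names sort correctly as plain strings.
    year, month, day = max(dates)
    return os.path.join(root, year, month, day)

# Demonstration against a throwaway tree spanning a month boundary.
root = tempfile.mkdtemp()
for year, month, day in [("2005", "08", "31"), ("2005", "09", "01"), ("2005", "09", "02")]:
    os.makedirs(os.path.join(root, year, month, day))
newest = newest_daily_directory(root)
```

Run against the example disc above, this would select 2005/09/11, the latest daily directory.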
Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

root:/mnt/cdrw/2005/09/07# ls -l
total 10
dr-xr-xr-x 2 root root 2048 Sep 7 02:31 host1/
-r--r--r-- 1 root root 0 Sep 7 03:27 cback.stage
dr-xr-xr-x 2 root root 4096 Sep 7 02:30 host2/
dr-xr-xr-x 2 root root 4096 Sep 7 03:23 host3/

In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27.

Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files "collected" from Cedar Backup extensions or by other third-party processes on your system.

root:/mnt/cdrw/2005/09/07/host1# ls -l
total 157976
-r--r--r-- 1 root root 11206159 Sep 7 02:30 boot.tar.bz2
-r--r--r-- 1 root root 0 Sep 7 02:30 cback.collect
-r--r--r-- 1 root root 3199 Sep 7 02:30 dpkg-selections.txt.bz2
-r--r--r-- 1 root root 908325 Sep 7 02:30 etc.tar.bz2
-r--r--r-- 1 root root 389 Sep 7 02:30 fdisk-l.txt.bz2
-r--r--r-- 1 root root 1003100 Sep 7 02:30 ls-laR.txt.bz2
-r--r--r-- 1 root root 19800 Sep 7 02:30 mysqldump.txt.bz2
-r--r--r-- 1 root root 4133372 Sep 7 02:30 opt-local.tar.bz2
-r--r--r-- 1 root root 44794124 Sep 8 23:34 opt-public.tar.bz2
-r--r--r-- 1 root root 30028057 Sep 7 02:30 root.tar.bz2
-r--r--r-- 1 root root 4747070 Sep 7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
-r--r--r-- 1 root root 603863 Sep 7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
-r--r--r-- 1 root root 113484 Sep 7 02:30 var-lib-jspwiki.tar.bz2
-r--r--r-- 1 root root 19556660 Sep 7 02:30 var-log.tar.bz2
-r--r--r-- 1 root root 14753855 Sep 7 02:30 var-mail.tar.bz2

As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions.
The resulting backup files are named in a way that makes it easy to determine what they represent.

Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before ".tar.bz2") represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.

The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension.

The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the "all" flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).

Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, e.g. perhaps 783-785, followed by 786-800, etc.

Recovering Filesystem Data

Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before ".tar") represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar.) Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.
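The naming convention can be reversed mechanically. This sketch (a hypothetical helper, not part of Cedar Backup) maps a collect archive name back to the directory it represents; note that real directory names which themselves contain dashes make the mapping ambiguous, so treat the result as a hint:

```python
def archive_to_path(name):
    """Map a collect-action archive name back to the directory it holds.

    Illustrative only; directory names containing literal dashes are
    ambiguous under this naming scheme.
    """
    for suffix in (".tar.bz2", ".tar.gz", ".tar"):
        if name.endswith(suffix):
            name = name[:-len(suffix)]
            break
    if name == "-":          # special case: the root directory
        return "/"
    return "/" + name.replace("-", "/")
```

For example, archive_to_path("var-lib-jspwiki.tar.bz2") recovers /var/lib/jspwiki.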
If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

Where to extract your backup

If you are restoring a home directory or some other non-system directory as part of a full restore, it is probably fine to extract the backup directly into the filesystem. If you are restoring a system directory like /etc as part of a full restore, extracting directly into the filesystem is likely to break things, especially if you re-installed a newer version of your operating system than the one you originally backed up. It's better to extract directories like this to a temporary location and pick out only the files you find you need.

When doing a partial restore, I suggest always extracting to a temporary location. Doing it this way gives you more control over what you restore, and helps you avoid compounding your original problem with another one (like overwriting the wrong file, oops).

Full Restore

To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.)

All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location. For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

root:/# bzcat boot.tar.bz2 | tar xvf -

Of course, use zcat or just cat, depending on what kind of compression is in use.
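If Python is handy, the same restore can be done with the standard library instead of a shell pipeline. This sketch (illustrative only, not Cedar Backup code) uses the tarfile module's compression autodetection, so it handles .tar, .tar.gz and .tar.bz2 alike; it builds and restores a throwaway archive to demonstrate:

```python
import os
import tarfile
import tempfile

def restore_archive(archive_path, destination):
    """Extract a tar archive (optionally gzip- or bzip2-compressed) into
    destination, equivalent to running `bzcat archive | tar xvf -` from
    within destination."""
    with tarfile.open(archive_path, "r:*") as archive:  # "r:*" autodetects compression
        archive.extractall(path=destination)

# Self-contained demonstration using a throwaway archive.
workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "boot")
os.mkdir(source)
with open(os.path.join(source, "vmlinuz"), "w") as handle:
    handle.write("kernel image\n")
with tarfile.open(os.path.join(workdir, "boot.tar.bz2"), "w:bz2") as archive:
    archive.add(source, arcname="boot")   # store a relative path, as Cedar Backup does

restore_archive(os.path.join(workdir, "boot.tar.bz2"), os.path.join(workdir, "restore"))
restored = os.path.join(workdir, "restore", "boot", "vmlinuz")
```

Because the members are stored relatively, the destination directory plays the same role as the directory you would cd into before running the shell pipeline.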
If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

root:/tmp# bzcat boot.tar.bz2 | tar xvf -

Again, use zcat or just cat as appropriate.

For more information, you might want to check out the manpage or GNU info documentation for the tar command.

Partial Restore

Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things).

The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Whereas with a full restore you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file, since the same file, if changed frequently, would appear in more than one backup.

Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known "contact" with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place.

Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file

Of course, use zcat or just cat, depending on what kind of compression is in use.
The tvf tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternatively, you can omit the path/to/file and search through the output using more or less.

If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there.

Once you have found your file, extract it using xvf:

root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file

Again, use zcat or just cat as appropriate.

Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file.

For more information, you might want to check out the manpage or GNU info documentation for the tar command.

Recovering MySQL Data

MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

Warning

I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it! MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

First, find the backup you are interested in. If you have specified "all databases" in configuration, you will have a single backup file, called mysqldump.txt.
If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration.

If you are restoring an "all databases" backup, make sure that you have correctly created the root user and know its password. Then, execute:

daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root

Of course, use zcat or just cat, depending on what kind of compression is in use.

Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.

If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root

Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database

Again, use zcat or just cat as appropriate.

For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.

Recovering Subversion Data

Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in.

First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.
The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension, depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, e.g. perhaps revisions 783-785, followed by 786-800, etc.

Next, if you still have the old Subversion repository around, you might want to just move it aside (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository; the Subversion dump format is "backend-agnostic".

root:/tmp# svnadmin create --fs-type=fsfs testrepo

Next, load the full backup into the repository:

root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo

Of course, use zcat or just cat, depending on what kind of compression is in use. Follow that with loads for each of the incremental backups:

root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo

Again, use zcat or just cat as appropriate. When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).
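If a repository has many incremental dumps, loading them by hand in the right order is error-prone. The sketch below (the helper name load_svn_dumps is hypothetical) sorts the dump files numerically on their starting revision, so the full dump (starting at revision 0) loads first and the incrementals follow in order. It assumes bzip2-compressed dumps named in the svndump-START:END-path style described above; substitute zcat or cat to match your compression.

```shell
# Sketch: load a full svndump plus its incrementals in ascending revision order.
# load_svn_dumps REPO DUMPFILE...
load_svn_dumps() {
    repo="$1"; shift
    # Field 2 (split on '-') is "START:END"; numeric sort reads the START revision.
    for dump in $(printf '%s\n' "$@" | sort -t- -k2,2n); do
        bzcat "$dump" | svnadmin load "$repo"
    done
}
# e.g. load_svn_dumps testrepo svndump-*-opt-svn-repo1.txt.bz2
```

Sorting numerically rather than lexically matters: a lexical sort would put revision 1000 before revision 783.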
Note

Don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both the old and new repositories, the results are identical. This means that the repositories do contain the same content.

For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).

Recovering Mailbox Data

Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring. Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive.

First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension, depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration.

There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date.

Once you have found the files you are looking for, the restoration procedure is fairly simple.
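Every procedure in this appendix includes the refrain "use zcat or just cat as appropriate." If you script your restores, a small dispatch helper (the name smart_cat is hypothetical) can pick the right tool from each file's extension, so one loop handles compressed and uncompressed backups alike:

```shell
# Sketch: decompress-or-cat each file based on its extension.
smart_cat() {
    for f in "$@"; do
        case "$f" in
            *.bz2) bzcat "$f" ;;   # bzip2-compressed backup
            *.gz)  zcat "$f" ;;    # gzip-compressed backup
            *)     cat "$f" ;;     # uncompressed backup
        esac
    done
}
# e.g. smart_cat mbox-200606*-home-user-mail-greylist* > restore.mbox
```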
First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any). Here is an example for a single backed-up file:

root:/tmp# rm restore.mbox # make sure it's not left over
root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
root:/tmp# grepmail -a -u restore.mbox > nodups.mbox

At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist. Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat.

If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case.

Recovering Data split by the Split Extension

The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command.

The split-up files are not difficult to work with. Simply find all of the files (which could be split across multiple discs) and concatenate them together:

root:/tmp# rm usr-src-software.tar.gz # make sure it's not there
root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz

Then, use the resulting file as usual.

Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
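When a file was split into many chunks, manual concatenation gets tedious and gives no warning about a missing chunk. The sketch below (the helper name reassemble is hypothetical) relies on the zero-padded _00001-style suffixes sorting lexically, and refuses to produce output if the sequence has a gap:

```shell
# Sketch: reassemble FILE from FILE_00001, FILE_00002, ..., failing on gaps.
reassemble() {
    out="$1"
    rm -f "$out"                       # make sure it's not left over
    expected=1
    for chunk in "$out"_*; do          # glob sorts the zero-padded suffixes in order
        want=$(printf '%05d' "$expected")
        case "$chunk" in
            "${out}_${want}") ;;       # this is the next expected chunk
            *) echo "missing or unexpected chunk (wanted ${out}_${want}, saw ${chunk})" >&2
               return 1 ;;
        esac
        cat "$chunk" >> "$out"
        expected=$((expected + 1))
    done
}
# e.g. reassemble usr-src-software.tar.gz
```

This catches the truncated-file failure mode described above before you waste time on a corrupt archive.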
Appendix D. Securing Password-less SSH Connections

Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

Traditionally, Cedar Backup has relied on a "segmenting" strategy to minimize the risk. Although the backup typically runs as root (so that all parts of the filesystem can be backed up), we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections. With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy; they simply may not have a way to create a login which is only used for backups. So, what are these users to do?

Fortunately, there is a solution. The SSH authorized keys file supports a way to put a "filter" in place on an SSH connection.
This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

command="command"
   Specifies that the command is executed whenever this key is used for authentication. The command supplied by the user (if any) is ignored. The command is run on a pty if the client requests a pty; otherwise it is run without a tty. If an 8-bit clean channel is required, one must not request a pty or should specify no-pty. A quote may be included in the command by quoting it with a backslash. This option might be useful to restrict certain public keys to perform just a specific operation. An example might be a key that permits remote backups but nothing else. Note that the client may specify TCP and/or X11 forwarding unless they are explicitly prohibited. Note that this option applies to shell, command or subsystem execution.

Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

So, let's imagine that we have two hosts: master "mickey", and peer "minnie". Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.
To put the filter in place, we add a command option to the key, like this (again, all on one line):

command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

A very basic validate-backup script might look something like this:

#!/bin/bash
if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
   ${SSH_ORIGINAL_COMMAND}
else
   echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
   exit 1
fi

This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
debug1: Reading configuration data /home/backup/.ssh/config
debug1: Applying options for daystrom
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0

Omit the -v and you have your command: scp -f .profile.
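A filter that admits several commands is usually easier to read as a case statement over SSH_ORIGINAL_COMMAND than as a chain of if tests. This is only a sketch, not a recommended policy: the allowed commands and the /path/to/collect paths are placeholders that you would replace with the exact command lines your own setup uses.

```shell
# Sketch of a whitelist-style dispatch for a validate-backup script.
# Everything not explicitly listed is rejected with a readable error.
validate_backup() {
    case "${SSH_ORIGINAL_COMMAND}" in
        "ls -l"|"scp -f /path/to/collect/cback.collect"|"scp -t /path/to/collect/cback.stage")
            ${SSH_ORIGINAL_COMMAND}    # allowed: run the original command
            ;;
        *)
            echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]." >&2
            return 1
            ;;
    esac
}
```

The real /opt/backup/validate-backup would consist of just this dispatch (plus the shebang line), invoked once when the connection comes in.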
For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

scp -f /path/to/collect/cback.collect
scp -f /path/to/collect/*
scp -t /path/to/collect/cback.stage

If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

/usr/bin/cback --full collect
/usr/bin/cback collect

Of course, you would have to list the actual path to the cback executable, exactly the one listed in the configuration option for your managed peer.

I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

Appendix E. Copyright

Copyright (c) 2005-2010 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation.

For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work.

This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. ==================================================================== GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. 
You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. 
The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

====================================================================

    Chapter 6. Official Extensions

    System Information Extension

    The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action.

    This extension saves off the following information to the configured Cedar Backup collect directory. The saved data is always compressed using bzip2.

    • Currently-installed Debian packages via dpkg --get-selections

    • Disk partition information via fdisk -l

    • System-wide mounted filesystem contents, via ls -laR

    The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>sysinfo</name>
          <module>CedarBackup2.extend.sysinfo</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.


    Data Recovery

    Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand.

    If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category.

    My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.

    CedarBackup2-2.22.0/doc/manual/apcs06.html0000664000175000017500000000607312143054371021557 0ustar pronovicpronovic00000000000000Recovering Data split by the Split Extension

    Recovering Data split by the Split Extension

    The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command.

    The split up files are not difficult to work with. Simply find all of the files — which could be split between multiple discs — and concatenate them together.

    root:/tmp# rm usr-src-software.tar.gz  # make sure it's not there
    root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz
          

    Then, use the resulting file like usual.

    Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
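    If you prefer to script the concatenation, the steps above can be sketched in Python. This is a hypothetical helper, not part of Cedar Backup itself; it assumes the chunk-naming convention shown above (a five-digit, one-based numeric suffix) and refuses to proceed if a chunk in the middle is missing, which guards against producing a silently corrupt file.

```python
import glob


def reassemble(prefix, out_path):
    """Concatenate split chunks (prefix_00001, prefix_00002, ...) in order.

    Hypothetical helper illustrating the manual steps above; assumes the
    chunk suffix convention used by the Split extension's output.
    """
    chunks = sorted(glob.glob(prefix + "_*"))
    if not chunks:
        raise FileNotFoundError("no chunks found for %s" % prefix)
    # Verify the numeric suffixes are contiguous, so a missing middle
    # chunk is caught before we write a corrupt result.
    for i, chunk in enumerate(chunks, start=1):
        if chunk != "%s_%05d" % (prefix, i):
            raise ValueError("missing or unexpected chunk: expected %s_%05d" % (prefix, i))
    with open(out_path, "wb") as out:
        for chunk in chunks:
            with open(chunk, "rb") as f:
                out.write(f.read())
```

    The contiguity check is the important part: plain `cat prefix_*` would happily skip a missing chunk and leave you with a corrupt archive.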


    Chapter 4. Command Line Tools

    Overview

    Cedar Backup comes with two command-line programs, the cback and cback-span commands. The cback command is the primary command line interface and the only Cedar Backup program that most users will ever need.

    Users that have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback-span tool to split their data between multiple discs.


    How to Get Support

    Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. That said, someone can usually help you solve whatever problems you might see.

    If you experience a problem, your best bet is to write to the Cedar Backup Users mailing list. [1] This is a public list for all Cedar Backup users. If you write to this list, you might get help from me, or from some other user who has experienced the same thing you have.

    If you know that the problem you have found constitutes a bug, or if you would like to make an enhancement request, then feel free to file a bug report in the Cedar Solutions Bug Tracking System. [2]

    If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write the support address. That mail will go directly to me or to someone else who can help you. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency.

    Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. [3]

    In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

    Tip

    Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is also good information to include with a bug report.



    [1] See SF Mailing Lists at http://cedar-backup.sourceforge.net/.

    [2] See SF Bug Tracking at http://cedar-backup.sourceforge.net/.

    [3] See Simon Tatham's excellent bug reporting tutorial: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html .


    Incremental Backups

    Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis.

    In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value [15] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.

    Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.
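    The selection logic described above can be sketched in a few lines of Python. This is an illustrative sketch, not Cedar Backup's actual implementation; the function names are hypothetical, and SHA-1 is used here because the manual's footnote identifies the checksum as an SHA hash.

```python
import hashlib


def sha_digest(path):
    """Compute the SHA-1 digest of a file, reading it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()


def select_incremental(paths, saved):
    """Return the files that need backing up on an incremental run.

    `saved` maps path -> checksum from the previous run; it is updated
    in place so it can be persisted for the next run (Cedar Backup
    stores the equivalent data in its .sha files).
    """
    changed = []
    for path in paths:
        digest = sha_digest(path)
        if saved.get(path) != digest:
            # New file, or checksum mismatch: back it up and record
            # the new checksum.
            changed.append(path)
            saved[path] = digest
        # Otherwise the file is unchanged and is skipped.
    return changed
```

    Resetting the mappings at the start of the week (as Cedar Backup does) simply amounts to starting over with an empty `saved` dictionary, which forces every file to be selected again.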



    [15] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.


    MySQL Extension

    The MySQL Extension is a Cedar Backup extension used to back up MySQL [30] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Note

    This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

    The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

    Warning

    The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

    [mysqldump]
    user     = root
    password = <secret>
             

    Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

    As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

    [mysqldump]
    host = remote.host
             

    For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

    Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mysql</name>
          <module>CedarBackup2.extend.mysql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

    <mysql>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

    <mysql>
       <user>root</user>
       <password>password</password>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    The following elements are part of the MySQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user).

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    password

    Password associated with the database user.

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.


    Chapter 3. Installation

    Background

    There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.

    If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.



    Preface

    Purpose

    This software manual has been written to document the 2.0 series of Cedar Backup, originally released in early 2005.


    Recovering MySQL Data

    MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

    Warning

    I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it!

    MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

    First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration.

    If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute:

    daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.

    If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root
          

    Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database
          

    Again, use zcat or just cat as appropriate.
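    If you are scripting a restore, the bzcat/zcat/cat choice above can be made automatically from the dump file's extension. This is a hypothetical helper, not part of Cedar Backup; it uses the standard library's bz2 and gzip modules to produce a readable text stream regardless of which compress mode was configured.

```python
import bz2
import gzip


def open_dump(path):
    """Open a mysqldump output file, decompressing based on extension.

    Mirrors the manual bzcat / zcat / cat choice: .bz2 and .gz files
    are decompressed transparently, anything else is read as-is.
    """
    if path.endswith(".bz2"):
        return bz2.open(path, "rt")
    if path.endswith(".gz"):
        return gzip.open(path, "rt")
    return open(path, "r")
```

    The returned stream could then be piped to the mysql client (for example via subprocess), just as the shell pipelines above do.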

    For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.


    Appendix E. Copyright

    
    Copyright (c) 2005-2010
    Kenneth J. Pronovici
    
    This work is free; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (the "GPL"), Version 2,
    as published by the Free Software Foundation.
    
    For the purposes of the GPL, the "preferred form of modification"
    for this work is the original Docbook XML text files.  If you
    choose to distribute this work in a compiled form (i.e. if you
    distribute HTML, PDF or Postscript documents based on the original
    Docbook XML text files), you must also consider image files to be
    "source code" if those images are required in order to construct a
    complete and readable compiled version of the work.
    
    This work is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Copies of the GNU General Public License are available from
    the Free Software Foundation website, http://www.gnu.org/.
    You may also write the Free Software Foundation, Inc., 59 Temple
    Place, Suite 330, Boston, MA 02111-1307 USA.
    
    ====================================================================
    
    		    GNU GENERAL PUBLIC LICENSE
    		       Version 2, June 1991
    
     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
         59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    			    Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    License is intended to guarantee your freedom to share and change free
    software--to make sure the software is free for all its users.  This
    General Public License applies to most of the Free Software
    Foundation's software and to any other program whose authors commit to
    using it.  (Some other Free Software Foundation software is covered by
    the GNU Library General Public License instead.)  You can apply it to
    your programs, too.
    
      When we speak of free software, we are referring to freedom, not
    price.  Our General Public Licenses are designed to make sure that you
    have the freedom to distribute copies of free software (and charge for
    this service if you wish), that you receive source code or can get it
    if you want it, that you can change the software or use pieces of it
    in new free programs; and that you know you can do these things.
    
      To protect your rights, we need to make restrictions that forbid
    anyone to deny you these rights or to ask you to surrender the rights.
    These restrictions translate to certain responsibilities for you if you
    distribute copies of the software, or if you modify it.
    
      For example, if you distribute copies of such a program, whether
    gratis or for a fee, you must give the recipients all the rights that
    you have.  You must make sure that they, too, receive or can get the
    source code.  And you must show them these terms so they know their
    rights.
    
      We protect your rights with two steps: (1) copyright the software, and
    (2) offer you this license which gives you legal permission to copy,
    distribute and/or modify the software.
    
      Also, for each author's protection and ours, we want to make certain
    that everyone understands that there is no warranty for this free
    software.  If the software is modified by someone else and passed on, we
    want its recipients to know that what they have is not the original, so
    that any problems introduced by others will not reflect on the original
    authors' reputations.
    
      Finally, any free program is threatened constantly by software
    patents.  We wish to avoid the danger that redistributors of a free
    program will individually obtain patent licenses, in effect making the
    program proprietary.  To prevent this, we have made it clear that any
    patent must be licensed for everyone's free use or not licensed at all.
    
      The precise terms and conditions for copying, distribution and
    modification follow.
    
    		    GNU GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License.  The "Program", below,
    refers to any such program or work, and a "work based on the Program"
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language.  (Hereinafter, translation is included without limitation in
    the term "modification".)  Each licensee is addressed as "you".
    
    Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running the Program is not restricted, and the output from the Program
    is covered only if its contents constitute a work based on the
    Program (independent of having been made by running the Program).
    Whether that is true depends on what the Program does.
    
      1. You may copy and distribute verbatim copies of the Program's
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.
    
    You may charge a fee for the physical act of transferring a copy, and
    you may at your option offer warranty protection in exchange for a fee.
    
      2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) You must cause the modified files to carry prominent notices
        stating that you changed the files and the date of any change.
    
        b) You must cause any work that you distribute or publish, that in
        whole or in part contains or is derived from the Program or any
        part thereof, to be licensed as a whole at no charge to all third
        parties under the terms of this License.
    
        c) If the modified program normally reads commands interactively
        when run, you must cause it, when started running for such
        interactive use in the most ordinary way, to print or display an
        announcement including an appropriate copyright notice and a
        notice that there is no warranty (or else, saying that you provide
        a warranty) and that users may redistribute the program under
        these conditions, and telling the user how to view a copy of this
        License.  (Exception: if the Program itself is interactive but
        does not normally print such an announcement, your work based on
        the Program is not required to print an announcement.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Program,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Program, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Program.
    
    In addition, mere aggregation of another work not based on the Program
    with the Program (or with a work based on the Program) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
    
        a) Accompany it with the complete corresponding machine-readable
        source code, which must be distributed under the terms of Sections
        1 and 2 above on a medium customarily used for software interchange; or,
    
        b) Accompany it with a written offer, valid for at least three
        years, to give any third party, for a charge no more than your
        cost of physically performing source distribution, a complete
        machine-readable copy of the corresponding source code, to be
        distributed under the terms of Sections 1 and 2 above on a medium
        customarily used for software interchange; or,
    
        c) Accompany it with the information you received as to the offer
        to distribute corresponding source code.  (This alternative is
        allowed only for noncommercial distribution and only if you
        received the program in object code or executable form with such
        an offer, in accord with Subsection b above.)
    
    The source code for a work means the preferred form of the work for
    making modifications to it.  For an executable work, complete source
    code means all the source code for all modules it contains, plus any
    associated interface definition files, plus the scripts used to
    control compilation and installation of the executable.  However, as a
    special exception, the source code distributed need not include
    anything that is normally distributed (in either source or binary
    form) with the major components (compiler, kernel, and so on) of the
    operating system on which the executable runs, unless that component
    itself accompanies the executable.
    
    If distribution of executable or object code is made by offering
    access to copy from a designated place, then offering equivalent
    access to copy the source code from the same place counts as
    distribution of the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License.  Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.
    
      5. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Program or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.
    
      6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.
    
      7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.
    
    If any portion of this section is held invalid or unenforceable under
    any particular circumstance, the balance of the section is intended to
    apply and the section as a whole is intended to apply in other
    circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system, which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded.  In such case, this License incorporates
    the limitation as if written in the body of this License.
    
      9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time.  Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Program
    specifies a version number of this License which applies to it and "any
    later version", you have the option of following the terms and conditions
    either of that version or of any later version published by the Free
    Software Foundation.  If the Program does not specify a version number of
    this License, you may choose any version ever published by the Free Software
    Foundation.
    
      10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission.  For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this.  Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.
    
    			    NO WARRANTY
    
      11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
    FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
    OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
    PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
    OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
    TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
    PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
    REPAIR OR CORRECTION.
    
      12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
    WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
    REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
    INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
    OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
    TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
    YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
    PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGES.
    
    		     END OF TERMS AND CONDITIONS
    
    ====================================================================
    
          
    CedarBackup2-2.22.0/doc/manual/apc.html0000664000175000017500000002342212143054371021223 0ustar pronovicpronovic00000000000000AppendixC.Data Recovery

    Appendix C. Data Recovery

    Finding your Data

    The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is that if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.)

    Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name.

    This is the root directory of my example disc:

    root:/mnt/cdrw# ls -l
    total 4
    drwxr-x---  3 backup backup 4096 Sep 01 06:30 2005/
          

    In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).

    Within each year directory is one subdirectory for each month represented in the backup.

    root:/mnt/cdrw/2005# ls -l
    total 2
    dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/
          

    In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005).

    Within each month directory is one subdirectory for each day represented in the backup.

    root:/mnt/cdrw/2005/09# ls -l
    total 8
    dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
    dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
    dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
    dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/
          

    Depending on how far into the week your backup media is, you might have as few as one daily directory in here, or as many as seven.

    Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

    root:/mnt/cdrw/2005/09/07# ls -l
    total 10
    dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
    -r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
    dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
    dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/
          

    In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27.

    Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.

    root:/mnt/cdrw/2005/09/07/host1# ls -l
    total 157976
    -r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
    -r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
    -r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
    -r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
    -r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
    -r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
    -r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
    -r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
    -r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
    -r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
    -r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
    -r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
    -r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
    -r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
    -r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2
             

    As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent.

    Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.
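    The mapping from tarfile name back to directory path is mechanical. As a rough sketch (a hypothetical helper, not something Cedar Backup provides, and valid only for paths that contain no literal dashes), you could reconstruct the original directory like this:

```shell
# Hypothetical helper (not part of Cedar Backup): recover the directory path
# from a collect tarfile name by stripping the .tar.bz2 suffix and mapping
# dashes back to slashes.  This breaks down for directory names that contain
# literal dashes.
name="var-lib-jspwiki.tar.bz2"
path="/$(printf '%s' "${name%.tar.bz2}" | tr '-' '/')"
echo "$path"
```

    Going the other direction, of course, you would collapse slashes into dashes and append the suffix.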

    The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension.

    The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).

    Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.
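    The svndump naming scheme can also be picked apart mechanically. As a hedged sketch (a hypothetical snippet, not something Cedar Backup itself provides), the revision range can be extracted with plain shell parameter expansion:

```shell
# Hypothetical helper (not part of Cedar Backup): extract the revision range
# from a subversion-extension dump file name of the form described above.
name="svndump-0:782-opt-svn-repo1.txt.bz2"
range="${name#svndump-}"      # strip the leading "svndump-" prefix
range="${range%%-*}"          # keep everything up to the first remaining dash
start="${range%%:*}"          # starting revision is before the colon
echo "$range"
[ "$start" = "0" ] && echo "full backup"
```

    A starting revision of zero marks a full backup; anything else is an incremental dump.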

    CedarBackup2-2.22.0/doc/manual/images/note.png [binary PNG image data omitted]
    CedarBackup2-2.22.0/doc/manual/images/warning.png [binary PNG image data omitted]
    CedarBackup2-2.22.0/doc/manual/images/info.png [binary PNG image data omitted]
    CedarBackup2-2.22.0/doc/manual/ch02s05.html0000664000175000017500000000550012143054371021541 0ustar pronovicpronovic00000000000000Coordination between Master and Clients

    Coordination between Master and Clients

    Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just "take care of it" for them.

    Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.
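    For example (the times below are purely illustrative assumptions — the right schedule depends on how long your own collect and stage runs take), cron entries across the pool might look like this, with clients collecting well before the master stages, and purging only after the store has had time to complete:

```
# On each client machine: collect early, purge late.
30 0 * * * root  cback collect
30 5 * * * root  cback purge

# On the master: stage after the clients have finished collecting,
# then store, then purge last of all.
 0 2 * * * root  cback stage
 0 4 * * * root  cback store
30 5 * * * root  cback purge
```

    The key invariant is simply that each machine's purge runs only after the master has finished staging that machine's collected data.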

    CedarBackup2-2.22.0/doc/manual/apcs04.html0000664000175000017500000001431012143054371021546 0ustar pronovicpronovic00000000000000Recovering Subversion Data

    Recovering Subversion Data

    Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.

    Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

    Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic.

    root:/tmp# svnadmin create --fs-type=fsfs testrepo
          

    Next, load the full backup into the repository:

    root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Follow that with loads for each of the incremental backups:

    root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
    root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Again, use zcat or just cat as appropriate.

    When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).

    Note

    Don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both old and new repositories, the results are identical. This means that the repositories do contain the same content.

    For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).

    CedarBackup2-2.22.0/doc/manual/ch05s06.html0000664000175000017500000002726312143054371021557 0ustar pronovicpronovic00000000000000Configuring your Writer Device

    Configuring your Writer Device

    Device Types

    In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

    Devices identified by device name

    For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.
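    As a minimal illustrative fragment (the rest of the store section is omitted, and /dev/dvd is just the example device name used above), configuration for a device identified only by name might contain nothing more than:

```xml
<!-- Fragment of the store section of /etc/cback.conf; surrounding
     elements omitted.  For a device identified by name, only
     <target_device> is set and <target_scsi_id> is left out. -->
<target_device>/dev/dvd</target_device>
```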

    Devices identified by SCSI id

    Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type.

    In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord; and the device name will be used for other filesystem operations.

    A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system.

    On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device.

    You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1).

    Linux Notes

    On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later).

    Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.

    However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

    Finding your Linux CD Writer

    Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

    cdrecord -prcap dev=/dev/cdrom
             

    Running this command on my hardware gives output that looks like this (just the top few lines):

    Device type    : Removable CD-ROM
    Version        : 0
    Response Format: 2
    Capabilities   : 
    Vendor_info    : 'LITE-ON '
    Identification : 'DVDRW SOHW-1673S'
    Revision       : 'JS02'
    Device seems to be: Generic mmc2 DVD-R/DVD-RW.
    
    Drive capabilities, per MMC-3 page 2A:
             

    If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank.

    If this doesn't work, you should try to find an ATA or ATAPI device:

    cdrecord -scanbus dev=ATA
    cdrecord -scanbus dev=ATAPI
             

    On my development system, I get a result that looks something like this for ATA:

    scsibus1:
            1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
            1,1,0   101) *
            1,2,0   102) *
            1,3,0   103) *
            1,4,0   104) *
            1,5,0   105) *
            1,6,0   106) *
            1,7,0   107) *
             

    Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>.
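    Putting that together (the values below are taken from the cdrecord -scanbus example above; your own device path and SCSI id will differ), the relevant configuration lines might look like:

```xml
<!-- Fragment of the store section of /etc/cback.conf; surrounding
     elements omitted.  Values match the -scanbus example above. -->
<target_device>/dev/cdrom</target_device>
<target_scsi_id>ATA:1,0,0</target_scsi_id>
```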

    Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

    Mac OS X Notes

    On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, i.e. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.[27]

    Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution.



    [27] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information

    CedarBackup2-2.22.0/doc/cback.10000664000175000017500000002347212122612752017447 0ustar pronovicpronovic00000000000000.\" vim: set ft=nroff .\" .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # C E D A R .\" # S O L U T I O N S "Software done right." .\" # S O F T W A R E .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # Author : Kenneth J. Pronovici .\" # Language : nroff .\" # Project : Cedar Backup, release 2 .\" # Revision : $Id: cback.1 1027 2013-03-21 14:15:05Z pronovic $ .\" # Purpose : Manpage for cback script .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" .TH cback "1" "July 2010" "Cedar Backup" "Kenneth J. Pronovici" .SH NAME cback \- Local and remote backups to CD\-R/CD\-RW media .SH SYNOPSIS .B cback [\fIswitches\fR] action(s) .SH DESCRIPTION .PP The cback script provides the command\-line interface for Cedar Backup. Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. .PP Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. .PP There are two kinds of machines in a Cedar Backup pool. One machine (the \fImaster\fR) has a CD\-R or CD\-RW drive on it and is where the backup is written to disc. The others (\fIclients\fR) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are all referred to as \fIpeer\fR machines. 
There are four actions that take place as part of the backup process: \fIcollect\fR, \fIstage\fR, \fIstore\fR and \fIpurge\fR. Both the master and the clients execute the collect and purge actions, but only the master executes the stage and store actions. The configuration file \fI/etc/cback.conf\fR controls the actions taken during collect, stage, store and purge. .PP Cedar Backup also supports the concept of \fImanaged clients\fR. Managed clients have their entire backup process managed by the master via a remote shell. The same actions are run as part of the backup process, but the master controls when the actions are executed on the clients rather than the clients controlling it for themselves. .SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-q\fR, \fB\-\-quiet\fR Run quietly (display no output to the screen). .TP \fB\-c\fR, \fB\-\-config\fR Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf. .TP \fB\-f\fR, \fB\-\-full\fR Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started. .TP \fB\-M\fR, \fB\-\-managed\fR Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally. .TP \fB\-N\fR, \fB\-\-managed-only\fR Include only managed clients when executing actions.
If the action being executed is listed as a managed action for a managed client, execute the action on that client, but do not execute the action locally. .TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is /var/log/cback.log. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 640 (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. .TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD recorder and its media. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.
.TP \fB\-D\fR, \fB\-\-diagnostics\fR Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. .SH ACTIONS .TP \fBall\fR Take all normal actions (collect, stage, store, purge), in that order. .TP \fBcollect\fR Take the collect action, creating tarfiles for each directory specified in the collect section of the configuration file. .TP \fBstage\fR Take the stage action, copying tarfiles from each peer in the backup pool to the daily staging directory, based on the stage section of the configuration file. .TP \fBstore\fR Take the store action, writing the daily staging directory to disc based on the store section of the configuration file. .TP \fBpurge\fR Take the purge action, removing old and outdated files as specified in the purge section of the configuration file. .TP \fBrebuild\fR Rebuild the "this week's" disc based on the current contents of the staging directory. This option has been made available as a means to recover a disc that has been "trashed" due to a hardware or media error. .TP \fBvalidate\fR Ensure that configuration is valid, but take no other action. Validation checks that the configuration file can be found and can be parsed, and also checks for typical configuration problems, such as directories that are not writable or problems with the target SCSI device. .SH RETURN VALUES .PP Cedar Backup returns 0 (zero) upon normal completion, and six other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 2.5. .TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB4\fR Error parsing indicated configuration file. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Error executing specified backup actions. .SH NOTES .PP The script is designed to run as root, since otherwise it's difficult to back up system directories or write the CD or DVD device. 
However, pains are taken to switch to a backup user (specified in configuration) when appropriate. .PP To use the script, you must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions may be specified, in any arbitrary order. The "all", "rebuild" or "validate" actions may not be combined with other actions. If more than one action is specified, then actions will be taken in a sensible order (generally collect, followed by stage, followed by store, followed by purge). .PP If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The "all" action never executes extended actions, however. .PP Note that there is no facility for restoring backups. It is assumed that the user can deal with copying tarfiles off disc and using them to restore missing files as needed. The user manual provides detailed instructions in Appendix C. .PP Finally, you should be aware that backups to CD or DVD can probably be read by any user that has permissions to mount the CD or DVD drive. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. You might also want to investigate the encrypt extension. .SH FILES .TP \fI/etc/cback.conf\fR - Default configuration file .TP \fI/var/log/cback.log\fR - Default log file .SH BUGS .PP There probably are bugs in this code. However, it is in active use for my own backups, and I fix problems as I notice them. If you find a bug, please report it. If possible, give me the output from \-\-diagnostics, all of the error messages that the script printed into its log, and also any stack\-traces (exceptions) that Python printed. It would be even better if you could tell me how to reproduce the problem (i.e.
by sending me your configuration file). .PP Report bugs to . .SH AUTHOR Written by Kenneth J. Pronovici . .SH COPYRIGHT Copyright (c) 2004\-2010 Kenneth J. Pronovici. .br This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup2-2.22.0/doc/cback-span.10000664000175000017500000001172511416204446020406 0ustar pronovicpronovic00000000000000.\" vim: set ft=nroff .\" .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # C E D A R .\" # S O L U T I O N S "Software done right." .\" # S O F T W A R E .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # Author : Kenneth J. Pronovici .\" # Language : nroff .\" # Project : Cedar Backup, release 2 .\" # Revision : $Id: cback-span.1 1011 2010-07-10 23:58:29Z pronovic $ .\" # Purpose : Manpage for cback-span script .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" .TH cback\-span "1" "July 2010" "Cedar Backup" "Kenneth J. Pronovici" .SH NAME cback\-span \- Span staged data among multiple discs .SH SYNOPSIS .B cback\-span [\fIswitches\fR] .SH DESCRIPTION .PP This is the Cedar Backup span tool. It is intended for use by people who back up more data than can fit on a single disc. It allows a user to split (span) staged data between more than one disc. It can't be a Cedar Backup extension in the usual sense because it requires user input when switching media. .PP Generally, one can run the cback\-span command with no arguments. This will start it using the default configuration file, the default log file, etc. You only need to use the switches if you need to change the default behavior. .PP This command takes most of its configuration from the Cedar Backup configuration file, specifically the store section. Then, more information is gathered from the user interactively while the command is running. 
.SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-c\fR, \fB\-\-config\fR Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf. .TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is /var/log/cback.log. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 640 (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. .TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD recorder and its media. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions.
This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. .SH RETURN VALUES .PP This command returns 0 (zero) upon normal completion, and six other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 2.5. .TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB4\fR Error parsing indicated configuration file. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Other error during processing. .SH NOTES .PP Cedar Backup itself is designed to run as root, since otherwise it's difficult to back up system directories or write the CD or DVD device. However, this command can be run safely as any user that has read access to the Cedar Backup staging directories and write access to the CD or DVD device. .SH SEE ALSO cback(1) .SH FILES .TP \fI/etc/cback.conf\fR - Default configuration file .TP \fI/var/log/cback.log\fR - Default log file .SH BUGS Report bugs to . .SH AUTHOR Written by Kenneth J. Pronovici . .SH COPYRIGHT Copyright (c) 2007,2010 Kenneth J. Pronovici. .br This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup2-2.22.0/PKG-INFO Metadata-Version: 1.0 Name: CedarBackup2 Version: 2.22.0 Summary: Implements local and remote backups to CD/DVD media. Home-page: http://cedar-backup.sourceforge.net/ Author: Kenneth J. Pronovici Author-email: pronovic@ieee.org License: Copyright (c) 2004-2011,2013 Kenneth J. Pronovici. Licensed under the GNU GPL. Description: Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines.
Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language. Keywords: local,remote,backup,scp,CD-R,CD-RW,DVD+R,DVD+RW Platform: Any
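The action-combination rules documented in the cback(1) DESCRIPTION earlier (built-in actions may be given in any order but execute in a fixed sensible order, while "all", "rebuild" and "validate" stand alone) can be sketched as follows. This is an illustrative sketch only, not Cedar Backup's actual command-line code; the `order_actions` helper is invented for this example, and extended actions are deliberately left out.

```python
# Illustrative sketch of the documented cback action-combination rules.
# NOT the actual Cedar Backup implementation: built-in actions run in a
# fixed order, and "all"/"rebuild"/"validate" may not be combined with
# anything else.  Extension actions are omitted for simplicity.

ORDER = ["collect", "stage", "store", "purge"]
EXCLUSIVE = {"all", "rebuild", "validate"}

def order_actions(requested):
    actions = list(dict.fromkeys(requested))  # de-duplicate, keep first occurrence
    if EXCLUSIVE & set(actions):
        if len(actions) > 1:
            raise ValueError("'all', 'rebuild' and 'validate' cannot be combined")
        return actions
    # Built-in actions always execute in the documented sensible order.
    return sorted(actions, key=ORDER.index)

print(order_actions(["purge", "collect", "store"]))  # -> ['collect', 'store', 'purge']
```

Whatever order the user types on the command line, execution proceeds collect, then stage, then store, then purge, matching the behavior the manpage describes.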